Accession ID: PMC10693699. PMID: 38042773.

Background
In older clients’ home care, medication management is a key part of home care professionals’ daily tasks [ 1 , 2 ]. In addition, the professionals assist older home care clients personally, for example with eating, and provide nursing treatments such as wound care [ 2 ]. Typically, older people use between one and five separate medications every day. Administering medications and ensuring that they are taken at the right time is an important and time-consuming part of daily medication management for home care professionals. Notably, for many older clients, ensuring medication intake is the only reason for home visits [ 1 , 2 ]. At the same time, most of these clients also use self-care medications that can be bought without a prescription [ 3 , 4 ] and are therefore involved in managing their own medication regimens in their everyday lives [ 4 , 5 ].
In home care, medication management is a process comprising the ordering, dispensing, reconstitution, administration, and monitoring of the effects of medications, as well as medication education [ 6 , 7 ]. Home care professionals manage medications by ordering them from a pharmacy. The professionals then dispense the medications manually into the client’s dosette for one week. It is important for home care clients to take their medications at the right time to achieve the best possible benefit and to avoid adverse effects caused by taking medicines at the wrong time. Home care clients can sometimes take their medications by themselves or with the help of their families and relatives [ 8 , 9 ].
During daily home visits, home care professionals are obliged to monitor the effects of medications by following up on the health condition of older home care clients using different measurement methods, such as blood pressure measurement. In addition, home care clients should be asked about the effects and adverse effects of their medication [ 10 ]. Medication safety is a central element of home care. During home visits, home care professionals prevent polypharmacy, the over-prescription of medications, and medication errors to ensure that home care clients can live safely at home [ 11 ]. Medication education is an important task of home care professionals [ 12 ]. They improve clients’ understanding of and adherence to medications by educating them about the indications and the common and severe adverse effects of medications [ 13 , 14 ], especially clients receiving high-risk medications [ 14 ]. Different digital solutions have been developed to ensure medication safety while reducing the workload of home care professionals [ 8 , 15 , 16 ]. These include robots for medication management that remind home care clients to take their medicines at the proper time [ 16 ], thus relieving professionals of this task [ 8 ].
Nurses’ use of working time in hospitals has been studied from the viewpoint of robots delivering nursing care, such as auto-tracking systems that identify patients and robots that assist with patients’ hygiene. These studies found that using robots for different nursing treatments decreased the working time nurses needed per patient compared with manual care provided by the nurses [ 17 ].
There is a lack of studies on home care professionals’ use of working time in older people’s home care, especially in medication management. The use of robots for medication management in older people’s home care has been studied before, but these studies have mostly focused on testing different robots from a technical point of view [ 18 , 19 ]. The number of older people receiving home care is increasing [ 20 ], as is the number of clients receiving multiple medications who require assistance with administration. At the same time, the shortage of home care professionals is growing [ 21 ]. Therefore, it is necessary to investigate potential new solutions, such as robots, to decrease the workload of home care professionals and to guarantee safe medication management for older people living at home. How robots for medication management influence home care professionals’ use of working time is one area that needs more investigation.
Our study aimed to examine the effect of using a robot for medication management on home care professionals’ use of working time. The research hypothesis was that using a robot for medication management would decrease the professionals’ use of working time in home care.

Methods
Study design
A pragmatic non-randomized controlled clinical trial with three data collection points (baseline, 1 month, and 2 months) [ 22 ] was carried out in Finland in 2021 (Fig. 1 ). The design was chosen to evaluate the effectiveness of an intervention under real-world conditions, to provide a more accurate picture of how treatments work in practice, and to help improve the quality and effectiveness of healthcare delivery [ 23 ]. The study was registered retrospectively (18/06/2023) at ClinicalTrials.gov (Identifier: NCT05908604). The CONSORT Checklist was used for reporting the study [ 24 ].
Research environment
This study was conducted in home care in Eastern Finland. Home care there is organized by different service providers, such as municipal home care services [ 25 , 26 ] and private and third-sector services [ 27 , 28 ]. Municipal home care services consist of home visits, including support for everyday activities and self-care, and counselling on the services available [ 25 , 26 ]. Private-sector services consist of assistance and home care services provided in clients’ homes with 24-hour assistance. The care and services provided by the third sector consist of day-to-day home care services in older clients’ homes [ 27 , 28 ]. In this article, we use the term home care professional, which covers public health nurses, practical nurses, and registered nurses working in home care. In Finland, public health nurses and registered nurses complete their degrees at a University of Applied Sciences. The degree complies with the European Qualifications Framework (EQF) [ 29 ] and is defined as level-six education with 210 ECTS and 240 ECTS credits. Practical nurses have completed level-four (EQF) training consisting of a 180-ECTS vocational qualification [ 30 ].
The collaborating organization in this study operates in a rural region and has 800 home care professionals serving 1,500 older home care clients. In Finland, home care is provided within older clients’ homes, and several home care professionals care for several clients. In practice, there is no named-nurse system, and therefore different professionals visit the same clients’ homes. In 2021, the organization purchased 110 robots for medication management. The use of these robots in home care has been implemented in some areas within the region, with the future goal of providing access to all clients who are able to use them. The aim is to decrease home care professionals’ home visits and to increase older home care clients’ safe medication management [ 31 ]. Clients using the robots pay a monthly service fee based on their income as part of their home care services; for example, low-income clients have been able to receive home care services, including the robot, free of charge. Medications are dispensed ready for home care clients by a pharmacist and delivered in single-dose bags covering two weeks; one single-dose bag contains the medications for one intake. Without the robot, the professionals dispense medication manually into a dosette in the client’s home once a week and then visit the client’s home up to five times per day to administer the medications. In Finland, the goal of home care for older clients is to support them in living independently at home for as long as possible, and therefore several home visits per day are appropriate and common [ 21 ]. When a robot is used, the professionals load two weeks’ worth of bags into it. The robot stands on a table and assists older home care clients with spoken instructions and sound signals, dispensing their medications at the right time. In addition, the robot displays written instructions on its screen and uses indicator lights.
If the client does not take the medication after the reminders, the robot locks itself, and only home care professionals can open it. Moreover, the robot allows home care professionals to monitor older home care clients’ medication management (for a picture and more information, see: https://www.evondos.com/ ) [ 32 ].
Study participants
The study participants were home care professionals, including public health nurses, registered nurses, and practical nurses. The inclusion criteria for participation were: (1) voluntary participation in the study, (2) currently working in older people’s home care, and (3) able to communicate in Finnish or English. One of the researchers (RT) received research permission from the participating organization. After that, the same researcher (RT) contacted the head of home care to arrange meetings with the home care professionals. During the meetings, the researcher (RT) provided information about the study, including the inclusion criteria for participants.
Study conduct
The study was conducted in 2021. Home care professionals assessed clients’ ability to use the robot for medication management. They proposed the robot to clients who: (1) had regular tablet-form medication in use, (2) were able to use the robot independently, and (3) chose to use the robot voluntarily. The professionals did not propose the robot to clients who: (1) had physical challenges, such as poor eyesight, and/or (2) had neurological challenges, such as memory disorders. The home care team leader made the final decision about who received a robot. The intervention group (IG) consisted of the clients who received robots, i.e., clients living in the home care region that implemented the robots and whom the professionals judged able to use them. The control group (CG) consisted of clients living in the home care region that did not implement the robots but whom the professionals judged would have been able to use one.
The baseline periods for the IG were defined individually: before a client in the IG received a robot, a five-day baseline period was carried out, and on the day after this baseline period, the client started to use the robot for medication management. A common baseline period was defined for all CG clients. The home care professionals who participated in the study made home visits to the homes of both IG and CG clients. The data were collected from April to November 2021.
The non-random allocation into the IG and the CG was made before the baseline period (five days). In this pragmatic trial, random allocation was not feasible due to ethical and logistical considerations and limited resources for conducting the trial. Blinding was also impossible given the nature of the intervention. Altogether, 64 home care clients were allocated to the IG and 46 to the CG (Fig. 1 ). In both groups, pharmacies packed the regularly taken medications into single-dose bags sufficient for two weeks and then delivered the bags to home care services. In the IG, home care professionals loaded the bags into the robot, which enabled older home care clients to carry out medication management by themselves. In the CG, home care professionals dispensed medications manually into the dosette box during home visits.
Measures
The primary outcome measures were the total number of home visits (frequency) during the entire intervention period and the total working time home care professionals used for medication management (in minutes) during the entire intervention period. The secondary outcome was the home care professionals’ working time used for medication management taking into account the number of visits per day. The data were collected using the Working Time Tracking Form developed for this study by a professional team [ 33 ]. The team included a senior lecturer in nursing with medication management competence, a head of older people’s home care, and a home care professional working in older people’s home care. The content of the form was based on previous literature about medication management in home care [ 6 , 7 ]. The research protocol was planned by the authors before the study; it can be obtained from the corresponding author on reasonable request. In the Working Time Tracking Form, the home care professionals recorded the time in minutes they used for each phase of the process, i.e., ordering medications, dispensing medications into the dosette or into the robot, reconstituting medications, administering medications as tablets, administering medications via other routes, medication education, and monitoring the effects of medications. The form was filled in for all clients in both the IG and the CG at every home visit involving medication management during a 5-day period (Monday to Friday) at baseline and at the follow-ups at one and two months.
The content validity of the Time Tracking Form was evaluated by an expert panel, and the form was pre-tested by a pilot group [ 33 ]. The expert panel consisted of seven participants: one head of home care, three home care professionals, two senior lecturers in nursing with medication management competence, and one statistician. The panel members evaluated each item with a focus on usability but did not find any need for revision. The pilot test helped evaluate how understandable and clear the form was and how long it took to complete [ 34 ]. The answers from the pre-test were not included in this study.
Data collection
Data were collected from both groups using the Working Time Tracking Form at three data collection points (baseline, months 1 and 2). In addition, home care clients’ gender and age were collected as background data. The researcher (RT) delivered the paper Time Tracking Forms to the home care contact person, who gave them to the home care professionals; the professionals, in turn, took the forms to clients’ homes and filled them in to record the time spent on medication management during each home visit. The data collection lasted five days, from Monday to Friday, when most medication management took place. The home care professionals returned the forms to the home care contact person, who then delivered them to the researcher.
Data analysis
The SPSS v.26 software was used for data analysis by an expert statistician. The characteristics of the sample were reported using descriptive statistics including frequencies, percentages, mean values, and standard deviation.
A summation variable based on the items of the medication management process was formed to represent the total time for medication management. The homogeneity of the client groups was tested using the two-sample t-test (age) and the chi-squared test (gender). The t-test was used to compare the groups at each of the three timepoints for each phase of the process. P-values were adjusted with the Bonferroni correction to avoid Type I errors in inference (Table 2 ). Analysis of covariance was used to examine differences (Sidak multiple comparisons) between timepoints within both groups for the total time for medications, with the number of visits per day as the covariate (Table 3 ). A p-value ≤ 0.05 was regarded as statistically significant. There were no missing data.
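The group comparison and Bonferroni step described above can be sketched in a few lines. This is a minimal illustration with synthetic data: the group means, standard deviations, and random seed are assumptions for demonstration, not the study's measurements.

```python
# Hypothetical sketch of the group comparison described above; the synthetic
# data and group means are illustrative, not the study's dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-client medication-management times (minutes) at one timepoint.
ig = rng.normal(16.4, 4.0, 64)   # intervention group (n = 64)
cg = rng.normal(4.5, 1.5, 46)    # control group (n = 46)

# Two-sample t-test comparing the groups at this timepoint.
t_stat, p_value = stats.ttest_ind(ig, cg)

# Bonferroni correction across the three timepoints.
n_comparisons = 3
p_corrected = min(p_value * n_comparisons, 1.0)
print(p_corrected <= 0.05)  # → True
```

The same pattern would be repeated for each phase of the medication management process at each timepoint, multiplying each raw p-value by the number of comparisons.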
Ethical considerations
The Ethics Committee of the University of Eastern Finland provided ethical approval (24/2017), and the participating healthcare organization granted the required research permission. The study followed ethical principles, and all methods were performed in accordance with the Declaration of Helsinki [ 35 ].
One of the researchers (RT) informed 352 home care professionals about the study. The information included the aim of the study and the data collection process. Furthermore, the home care professionals were informed of the voluntary nature of participation and their right to discontinue participation in the study at any time. They were also informed about their anonymity to ensure privacy. Altogether, 315 home care professionals agreed to participate in the study, and during the study period they were taking care of 110 home care clients. All participants signed an informed consent form when agreeing to participate. No compensation was paid to the participants or their employer for participation in the study.

Results
Demographic characteristics
The clients were mostly female in both groups (IG: n = 46, 71.9%; CG: n = 35, 76.1%; p = 0.621). Their mean age was 79.3 years, SD 6.4 (IG: 79.1 years, SD 6.6; CG: 79.6 years, SD 6.0; p = 0.718). There were no statistically significant differences between the groups. There was no attrition, and all recruited participants remained in the study until the end. In addition, no harm from the intervention was reported. The home care professionals’ demographic characteristics are presented in Table 1 .
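As a sanity check, the reported gender comparison can be reproduced from the counts above (46 of 64 female in the IG, 35 of 46 in the CG). This sketch assumes the reported p = 0.621 comes from a 2×2 chi-squared test without Yates' continuity correction, which is what the numbers suggest.

```python
# Reproduce the gender homogeneity test from the reported counts.
# Rows: IG, CG; columns: female, male.
from scipy.stats import chi2_contingency

table = [[46, 64 - 46],   # IG: 46 female of 64 clients
         [35, 46 - 35]]   # CG: 35 female of 46 clients

# correction=False assumes no Yates' continuity correction was applied.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(p, 3))  # → 0.621, matching the reported value
```

With the continuity correction enabled the p-value would be noticeably larger, so the uncorrected test appears to be the one used.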
Total amount of home visits and working time used for medication management
In the IG, the total number of home visits for the 64 clients in a 5-day period decreased by 89.4% from baseline to the 1-month follow-up, from 878 visits to 93 visits, and by 92.5% at the 2-month follow-up, from 878 visits to 66 visits (p < 0.001). In the CG, the total number of home visits for the 46 clients in a 5-day period remained almost the same from baseline (670 visits) to the 1-month follow-up (668 visits) and the 2-month follow-up (668 visits).
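The reported percentage decreases follow directly from the visit counts above; a quick check:

```python
# Verify the reported percentage decreases in IG home visits.
baseline, month1, month2 = 878, 93, 66

decrease_1m = (baseline - month1) / baseline * 100
decrease_2m = (baseline - month2) / baseline * 100

print(round(decrease_1m, 1), round(decrease_2m, 1))  # → 89.4 92.5
```

Note that the 2-month figure rounds to 92.5%, as reported here in the Results, rather than the 92.4% that appears in the abstract.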
Home care professionals’ total working time for medication management increased in the IG from baseline to the 1-month and 2-month follow-ups (means 6.49, 16.37, and 16.74, respectively). In the CG, the total working time for medication management remained nearly the same from baseline to the 1-month and 2-month follow-ups (means 5.15, 4.48, and 5.13, respectively). After robot implementation, home care professionals in the IG no longer used working time for dispensing medications into the dosette, reconstituting medications, administering medications as tablets, or administering medications via other routes. Instead, their medication-related tasks focused on ordering medications, dispensing medications into the robot, monitoring the effects of medications, and medication education (Table 2 ).
The total working time used for medication management taking into account the number of visits per day decreased in the IG from 54.2 min (95% CI 49.6–58.8) to 34.9 min (95% CI 31.4–38.3), i.e., by slightly over 19 min (p < 0.001). During the follow-up, the corresponding working time remained the same in the CG (Table 3 ).
Discussion
This study aimed to examine the effect of using a robot for medication management on home care professionals’ use of working time. The study produced new knowledge about the effect of a digital solution on home care professionals’ use of working time for medication management. The total number of home visits decreased considerably in the IG. This is because the home care professionals did not need to visit IG clients’ homes to administer medications, which is logical because the robot administered the medications. However, the total time for medication management increased in the IG. This is due to the working time used for dispensing medications into the robot and for medication education. It is noteworthy that after the robot was taken into use, medication education also included education about using the robot; the clients might have needed education on robot use rather than on medication management. More specifically, the home care professionals’ working time used for medication management in relation to the number of visits per day decreased considerably. Thus, our results support our hypothesis that using a robot for medication management decreases the professionals’ use of working time in home care.
Without a robot, medication management can be a time-consuming task in daily home care, as reported in previous studies [ 8 , 9 ]. Furthermore, older home care clients have several medications [ 4 , 5 ] that should be given at the right time [ 9 , 10 ]. The growing number of older people increases the need for daily home care services and the cost of healthcare, and this development also causes a labor shortage, including a shortage of home care professionals [ 35 – 37 ]. Using a robot decreases the professionals’ time spent on medication management, and the time saved can be assigned to care and services that cannot be replaced otherwise. Our study showed that the use of a robot for medication management considerably decreased the number of home care professionals’ home visits. The resulting time savings have obvious benefits that should be quantified for the organization, such as cost-effectiveness gains estimated, for example, from average salaries and work schedules.
Decreasing home visits due to the use of a robot for medication management raises ethical questions. For instance, robot use in home care can increase older home care clients’ loneliness [ 39 ]. Loneliness among older home care clients has been shown to have health-related, psychological, and social consequences, including social isolation, mental illness, and even nutritional risks [ 38 , 39 ]. However, previous studies [ 3 , 40 , 41 ] have reported that robots for medication management improve older home care clients’ everyday life by increasing their autonomy. Moreover, clients have emphasized that using the robot made their medication management safer [ 42 ].
Based on previous studies, digital solutions for nursing care have influenced nurses’ working time [ 16 , 17 ]. In our study, the robot for medication management influenced the working time of the home care professionals, especially by decreasing the time they used for administering medications to older home care clients. Most medication errors typically occur during the medication administration stage; wrong medication, wrong dose, and wrong timing are the usual types of errors in home care that endanger patient safety [ 6 , 7 ]. Therefore, using a robot for medication management supports older people’s independent living in their homes and enables safe medication management in collaboration with older home care clients, their relatives, and home care professionals.
The results indicated that with the use of the robot for medication management, the time used for medication education increased. This occurred because after robot implementation, medication education also included education about using the robot. However, one essential question concerns the lack of medication education. Based on our results, in the CG medication education decreased from baseline to 1 month and then increased from 1 month to 2 months, affecting the total working time used for medication management. It might be that during the 5-day period at 1 month, the clients did not need medication education or no new medications had been prescribed for the CG clients, and therefore the time spent on medication education was lower.
It is evident that older home care clients have various chronic diseases [ 43 ] and multiple medication regimens. Therefore, home care professionals should pay more attention to medication education, including the indications for medications, the schedule for taking them, and their common and severe adverse effects [ 13 ]. Medication-related errors, especially in clients with high-risk medications [ 14 ], have serious consequences for older home care clients’ health and can lead to readmission to long-term healthcare settings, hospitalization, and even death [ 3 ].
Limitations and strengths of the study
Due to pragmatic reasons, we were unable to perform random allocation in our study. While random allocation is considered the gold standard for clinical trial design, we had to balance the need for rigorous scientific methodology with the practical realities of conducting a study in a real-world clinical setting. As a result, we used a non-randomized allocation method to assign participants to the groups, which imposes limitations on the internal validity and generalizability of the findings. However, we believe that the pragmatic approach allowed us to better reflect clinical practice and to optimize the study’s feasibility and acceptability, thus providing insights that can inform clinical practice and policy decisions. The Working Time Tracking Form was designed for and used for the first time in this study. The home care professionals described the form as easy to use, but the medication education recorded for the IG at the 1- and 2-month follow-ups might also include robot use guidance. For further use, we suggest revising the form to separate medication education from robot use guidance. In addition, it is noteworthy that filling in the Working Time Tracking Form added to the workload of the home care professionals; based on their evaluation, however, this took on average 2 min per visit.
Moreover, the data were collected with a paper form in older home care clients’ homes. This avoided the risk of high attrition rates among home care professionals posed by electronic data collection after the home visits [ 44 ]. In addition, systematic, researcher-guided data collection in collaboration with a contact person in home care was used to minimize the drop-out rate of home care professionals and, consequently, of home care clients. Our results represent the implementation of the robot for medication management in the home care of one city; thus, only preliminary conclusions and cautious generalizations can be made. However, to our knowledge, ours is the first study evaluating the effect of robot use on home care professionals’ use of working time from the perspective of the medication management process.

Conclusions
Using a robot for medication management decreased home care professionals’ use of working time. Consequently, it can lead to better health outcomes, improved satisfaction among older clients, and a reduction in readmissions to healthcare settings. It can also reduce home care professionals’ workload and stress levels and enhance their work efficiency, allowing them to complete their tasks with fewer errors, which improves patient safety.
The knowledge produced in this study has implications for practice and research. Robots for medication management should be widely implemented, based on older home care clients’ and professionals’ needs, to meet the challenge of the growing number of older people in need of home care. Future research is needed to evaluate the cost-effectiveness of using a robot for medication management in older people’s home care. In addition, research focusing on medication incidents and medication adherence is needed to improve robot-based medication management.

Background
Medication management plays a key role in the daily tasks that home care professionals deliver to older clients in home care. The aim of this study was to examine the effect of using a robot for medication management on home care professionals’ use of working time.
Methods
A pragmatic non-randomized controlled clinical trial was conducted. The participants were home care professionals who carried out home care clients’ medication management. Home care clients were allocated to an intervention group (IG) and a control group (CG) (n = 64 and 46, respectively) based on whether or not they received the robot. Data were collected using the Working Time Tracking Form prior to and 1 and 2 months after introducing the intervention. The t-test was used to compare the groups at each of the three timepoints. Analysis of covariance was used to examine the groups’ differences in the total time for medications, with the number of visits per day as the covariate.
Results
With robot use, the total number of home visits decreased by 89.4% and 92.5% after 1 and 2 months of intervention use, respectively, compared to pre-intervention (p < 0.001). The total working time used for medication management taking into account the number of visits per day decreased in the IG from 54.2 min (95% CI 49.6–58.8) to 34.9 min (95% CI 31.4–38.3), i.e., by slightly over 19 min (p < 0.001). During the follow-up, the corresponding working time remained the same in the CG.
Conclusion
Using a robot for medication management notably decreased home care professionals’ use of working time. For health services, this means that the time saved can be assigned to services that cannot be replaced otherwise. More digital solutions should be developed, based on home care clients’ and professionals’ needs, to meet the challenge of the growing number of older people in need of home care and to ensure their safety.
Trial registration
ClinicalTrials.gov Identifier: NCT05908604 retrospectively registered (18/06/2023).
Acknowledgements
The authors would like to sincerely thank all home care professionals who participated in this study.
Author Contributions
S.K.-U., M.V., M.K. and R.T. designed the study and collected the data. S.K.-U., M.V., J.K., M.K. and R.T. analyzed the data and wrote the manuscript.
Funding
This study was financially supported by the Finnish Work Environment Fund.
Data Availability
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate
The study followed ethical principles, and all methods were performed in accordance with the Declaration of Helsinki [ 35 ]. The Ethics Committee of the University of Eastern Finland provided ethical approval (24/2017), and the participating healthcare organization granted the required research permission. All participants received verbal and written information about the study and were told that participation was voluntary and that they could withdraw from the study at any time. After being provided with information about the study, the participants were given time to consider whether they wanted to take part. Finally, written informed consent was obtained from all participants. No compensation was paid to the participants or their employer for participation in the study.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
List of abbreviations
CG: Control group
CI: Confidence interval
CONSORT: Consolidated Standards of Reporting Trials
ECTS: European Credit Transfer and Accumulation System
EQF: European Qualifications Framework
Frequency
IG: Intervention group
SD: Standard deviation
SPSS: Statistical Package for the Social Sciences

License: CC BY. Retracted: no. Last updated: 2024-01-15 23:35:10. Citation: BMC Health Serv Res. 2023 Dec 2; 23:1344. Package file: oa_package/c6/36/PMC10693699.tar.gz
PMC10695030 (PMID: 38055528)
INTRODUCTION
In the human species, the genetic determination of sex is chromosomal, homogametic for the female and heterogametic for the male, and this conditions gonadal development, with some possible though infrequent variations. In a complex process mediated by the consecutive secretion of sex hormones (fundamentally androgens) in a proportional, meticulous and finely tuned way in each sex and at each moment of prenatal development, secondary differentiations are produced. This impregnation and sexual differentiation affect all cells, tissues and organs, both structurally and biochemically. It is very important to note that the brain is also involved in this process; although at the beginning of the 20th century the endocrinologist and sexologist Dr. Gregorio Marañón said that “The brain is the most important sexual organ of the human being”, it is only very recently, mainly thanks to the contributions of neuroimaging and genetic analysis techniques, among others, that solid knowledge has been acquired about the sexuation process, and more specifically about brain sexuation and, consequently, about the perception of differences in sexual identity. Just as there are cases that deviate from the majority process, or statistical normality, in gonadal differentiation, in the internal and external genitalia, in hormonal characteristics and even in secondary sexual characteristics, such deviations also appear to occur in brain differentiation, with consequences for the self-perception of identity, which may differ from the biological sex ( 1 ) .
One of the results of the whole process of sexual differentiation is the acquisition of sexual identity, ego sexuation, that is, how each person perceives themselves as a man or a woman. For most people, ego sexuation coincides with the sex assigned or designated by their environment at birth, allosexuation, which is based fundamentally on observation of the external genitalia. After this initial sexing, the whole process of gender construction begins, male or female, starting with the attribution of a name, the way the person is raised and educated, and so on ( 2 ) .
If the two perceptions, ego sexuation and allosexuation, coincide, the situation is one of cissexuality, or cisgender; if they do not, it is one of transsexuality, or transgender ( 2 ) . From here on, the concept “trans” will be used, since explaining the difference between the concepts of transsexuality and transgender is quite complex and goes beyond the objectives of this article.
Both in mythology and in classical cultures, as well as in various present-day ethnic groups and cultures, sexual identities have emerged that could be similar to what in our society today is known as transsexuality: people who do not accept their biological sex or their assigned gender, or even those who do not fit into the male/female dichotomous categories. These people are known in various settings as a third gender: hijra in Pakistan, khanith in Oman, fa’afafine in Samoa, muxes in Mexico, among others; they are generally not considered a problem and are even seen socially as a positive value.
It is not easy to know the magnitude of the trans phenomenon because studies are few, and those that exist are partial and generally cannot be compared, as they are based on different registers, definitions and methodologies. According to a meta-analysis carried out in 2015, based on 250 studies from 9 countries, the overall prevalence of transsexuality was 4.6 per 100,000 people: 6.8 for trans women and 2.6 for trans men ( 3 ) . Over the past 50 years, there has been an increase in prevalence. These data relate to people who turn to health systems, so the social reality can be expected to be greater, although its value is unknown. This increase in prevalence, which seems to be accelerating in recent times, raises a debate about its interpretation: some argue that, due to today's greater social permissiveness, people now come forward who previously would not have dared to, while other authors are very critical and attribute the increase to other factors, such as personal dissatisfaction, the personality crises typical of adolescence, and even a certain social fashion induced by the Queer movement ( 4 ) .
There are two aspects on which a high level of consensus is observed. The first is that trans people tend to suffer a worse quality of life and worse physical and mental health than the general population, related to the situation of vulnerability they experience. The second is that they often maintain a relationship with the health systems, either because of health problems or because of the endocrinological and surgical techniques that health care offers them to adjust their body appearance to the perceived gender pattern and the corresponding gender ( 5 - 6 ) .
Currents of thought are also emerging within the trans movement that disagree with the general essentialist approach and that question the relevance of having to redesign or transform the body of those who present incongruence with the perception of their gender, attributing this incongruence to the social sclerosis of the concepts of sex-gender ( 7 ) .
The health needs of transgender people, and the care that health systems and professionals provide or deny them, constantly raise dilemmas and open debates ranging from the ideological to the operational and even the legislative; these debates are not alien to nursing as a profession of care, accompaniment and advocacy for people who suffer or need help.
As a guideline for the professional positioning of nurses, this article attempts an ethical analysis of some of the dilemmas that arise when caring for transgender people, using the model of four principles established by Beauchamp and Childress in Principles of Biomedical Ethics ( 8 ) , later qualified by Professor Diego Gracia, who hierarchized these principles ( 9 ) .
Reflective study analyzing the most common dilemmas encountered in a narrative bibliographic review on nursing care for transgender people. The analysis is structured around the four bioethical principles that constitute so-called “Hierarchical Principlism”: the two minimum principles that must be fulfilled for any action to be considered ethical, non-maleficence and justice, and the two principles regarded as maximum, or of bioethical excellence, beneficence and autonomy.
As indicated in article 1.2 of the Code of Ethics for Nurses of the International Council of Nurses (ICN) (2021): “Nurses promote an environment in which everyone recognizes and respects the human rights, values, customs, religious and spiritual beliefs of the person, families and communities” ( 10 ) . It is within this ethical framework that the nursing care required by transgender people is set out below.
Non-maleficence
This principle, essential for any action to be considered ethical, stems from the Latin aphorism primum non nocere (first do no harm). In the care of trans people, it has a transcendent application both at the care level and at the community preventive level.
Care must be oriented to avoid the damage that any type of discrimination can cause and that can range from stereotyping these people when they declare themselves trans, making assumptions about their sexual practices or ways of life, to derogatory reactions or disrespect to the name or pronoun with which they are identified. Likewise, sanitary facilities must be adapted to ensure the necessary privacy and dignified treatment.
Transgender people who need it should receive the most appropriate and safe hormone treatments and forms of administration, as indicated by the World Health Organization in its Guidelines on self-care interventions for health and well-being (2022 revision).
No less important are community health education interventions in schools and families aimed at the acceptance of sexual diversity and, within it, of trans minors. Acceptance is the first step needed to protect them and avoid situations of marginalization or mistreatment, which are a direct cause of so-called gender dysphoria. Early detection of trans minors within the family, and the family's understanding of the nature of the phenomenon so that they can deal with it positively, are transcendent for this acceptance.
Justice
This principle is based on providing more care to those who need it the most. The health inequities of this group are well documented and are related to the laws and rights recognized in different countries. These inequities affect both those who begin the transition process and the collective in general, considering the conditions to which they are often forced, such as their socioeconomic and labor situations, mental health problems and marginality.
It should be part of nurses’ advocacy to encourage public health initiatives involving transgender people, ensure the competence of health professionals for this group and monitor compliance with non-discriminatory policies both in health systems and in society in general.
When carrying out an economic assessment of the transition process, not only the direct costs of health care, basically surgical and hormonal, should be considered; a rigorous health technology assessment should be carried out that takes into account direct, indirect, health and non-health costs, as well as costs that are difficult to quantify.
Beneficence
As health professionals who intend to provide excellent care, nurses have an ethical obligation to do good, provided this does not imply a risk to the person's way of life or coexistence.
Both the process of adaptation and acceptance by families in the case of underage people, and the process of transition or gender reassignment in adults involve a complex and sometimes tortuous and labyrinthine journey.
It is beneficial to monitor and advise families in the community, promoting support and self-help services and networks, in addition to bringing the population closer to knowledge of the trans phenomenon based on science so that citizens can understand it and, consequently, accept and respect it.
People who decide to undergo a process of harmonizing their sexual characteristics must make an intricate journey through the different services of the health systems. Simplifying this journey, implementing strategies aimed at the specific health care of this population, and offering follow-up through figures such as a case manager nurse or similar are likewise beneficial actions.
Autonomy
This principle of bioethical excellence is also transcendent: it means enabling people to make decisions according to their way of being in the world, and it implies recognizing the right to the trans condition, which is neither volitional, nor learned, nor teachable, nor ephemeral, nor capricious. Conditions and support are therefore necessary, as well as understandable and truthful information, especially when decisions will condition actions that are difficult or impossible to reverse.
Nurses who detect situations of need, in their role as advocates for patients, have the responsibility of putting those who need it in contact with specialized professionals or with reliable organizations that provide this information. According to article 1.3 of the ICN code of ethics ( 10 ) , “nurses ensure that the person and the family receive understandable, accurate, sufficient and timely information, in a way that is appropriate to the cultural, linguistic, cognitive and physical needs of the patient, in addition to their psychological state, on which to base their consent to care and corresponding treatment”.
In the case of minors, although the perception of sexuality belongs to the individual, autonomy in decision-making rests with their legal representatives. It will be necessary to guide the family unit towards early detection, follow up by providing points of reference, contribute to authenticating the person's account, rule out outside interference, respect the process and even accept situations of evanescence if they occur, and encourage prudence by delaying irreversible interventions as much as possible; that is, to give minors as much time as possible for personal maturation so that they can explore and establish the relationship they deem most appropriate with their sexual identity, their body and their gender role, protecting them and always avoiding situations of suffering or marginalization.
EDITOR IN CHIEF: Antonio José de Almeida Filho
ASSOCIATE EDITOR: Hugo Fernandes
ABSTRACT
Objectives:
to discuss ethical aspects in nursing care for transgender people.
Methods:
reflective study based on the dilemmas that emerge in nursing care for transgender people. The report was structured around the four bioethical principles.
Results:
health care for trans people is complex, cuts across many services and specialties, and is longitudinal in time, which is why it requires coordinated action. There is an ethical framework within which the nursing care needed by this group must be placed.
Final Considerations:
the nurse, as a health worker, can assume several general lines of action in the care of transgender patients. Therefore, complementary training should be provided not only to professionals but also to students of nursing and the other health sciences.
Descriptors:
OBJECTIVES
To discuss ethical aspects in nursing care for transgender people.
FINAL CONSIDERATIONS
Currently, the increase in the prevalence of trans people is striking; several hypotheses are being considered and more consistent studies are required. Therefore, especially in childhood and adolescence, each person's process must be respected, while acting with prudence.
Hormonal and surgical techniques for sex reassignment have advanced greatly, but they are not free of risks and undesirable side effects, especially regarding the genitals. For this reason, and given the option of living fully without assuming the prevailing normative conditions of gender, the debate is increasingly open, and the need to completely transform the sexual characteristics in every case is questioned. Metaphorically, one can ask whether people who experience an inconsistency between their self-perception of gender and their body characteristics should accept that they “were born in the wrong body” or that “they must conquer the body based on their identity” ( 4 , 7 ) .
Nurses, as health workers and in their role as advocates for users of the health system, can assume several general lines of action in the face of the trans phenomenon:
Monitoring and facilitating those who carry out the transition process in their passage through the complex labyrinth of the health system.
Accompanying and advising families and the educational community with doubts about the identity of their children, out of respect, but also out of prudence.
Disseminating scientific knowledge about the trans phenomenon in society to promote respect for sexual diversity and for trans people's rights.
For all these reasons, complementary training should be provided to professionals and students of nursing and the other health sciences, with the involvement of university centers as well as collegiate bodies and professional scientific societies.
Citation: Rev Bras Enferm. 76(Suppl 3):e20220797 (CC BY)
PMC10726224 (PMID: 38116399)
Guest Editor of the Roadmap.
Abstract
Optical sensors and sensing technologies are playing an ever more important role in our modern world. From micro-probes to large devices used in areas as diverse as medical diagnosis, defence, and the monitoring of industrial and environmental conditions, optics can be used in a variety of ways to achieve compact, low-cost, stand-off sensing with extreme sensitivity and selectivity. In fact, designing an optical sensor that functions well for a particular application requires intimate knowledge of the optical, material, and environmental properties that can affect its performance. This roadmap on optical sensors addresses different technologies and application areas. It comprises twelve contributions authored by world-leading experts, providing insight into the current state of the art and the challenges their respective fields face. Two articles address the area of optical fibre sensors, encompassing both conventional and specialty optical fibres. Several other articles are dedicated to laser-based sensors, micro- and nano-engineered sensors, whispering-gallery-mode and plasmonic sensors. The use of optical sensors in chemical, biological and biomedical areas is discussed in other papers. The different approaches required to satisfy applications in the visible, infrared and THz spectral regions are also discussed.
Gilberto Brambilla 1 and Luc Thévenaz 2
1 University of Southampton, United Kingdom
2 EPFL, Switzerland
Status
Optical fibre sensors (OFSs) are devices which exploit optical fibres to monitor physical quantities and provide an output in the electronic domain. In common with other optical sensors, OFSs provide immunity to electromagnetic interference, the capability to work in harsh environments, and top performance. Beyond what other optical sensors offer, OFSs allow multiplexing to a level which makes them cost-competitive with other, non-optical, types of sensors. The global OFS market has grown continuously over the last three decades and was estimated at USD 2.7–2.9 B in 2020–2021. It includes a myriad of sensing applications, ranging from chemical to physical and from gyroscopes to distributed sensors.
Most OFSs consist of four components: a transducer, which converts the physical measurand into an optical signal; a detection system, which converts the optical signal into the electronic domain; a waveguide, which delivers light from the source to the transducer and then to the detection system; and a source, which generates the light that will be turned into the optical signal by the transducer. OFSs can be broadly classified into intrinsic and extrinsic sensors by the role that the fibre has in the system: while in the former the fibre acts as both transducer and waveguide, in the latter it acts solely as waveguide. Successful examples of the latter type include endoscopes and fiberized optical coherence tomography (OCT). A further classification divides intrinsic OFSs into distributed, quasi-distributed and point sensors, according to whether transduction occurs continuously along the whole fibre length, at selected discrete points, or at a single point. The most successful distributed sensors measure temperature and vibrations and, to a lesser extent, strain. Fibre Bragg gratings (FBGs) are the most prominent quasi-distributed sensors because of their capability to multiplex and to measure temperature and strain in multiple locations, especially in the marine environment. Finally, the group of point sensors is very diverse and contains sensors based on various types of interferometers (such as hydrophones and gyroscopes), and sensors relying on a change of the polarization state (current sensors) or of the complex refractive index (bio and chemical sensors).
Overall, the global market for intrinsic OFSs is expected to grow strongly over this decade and reach USD 7.2 B by 2030, an average compound annual growth rate (CAGR) of 11.5% [ 1 ]. Although the high-value oil and gas industry currently represents half of the market for intrinsic OFSs and will continue to be a major driving force over the next decade, significant thrust should also arise from homeland security (border control), civil engineering (structural health monitoring), power and utilities (power cable monitoring, nuclear fusion), industry (process monitoring), and defence/aerospace (gyros).
Distributed sensors
Optical fibres offer the unique ability to realise fully distributed sensing, providing continuous and independent information about an environmental quantity at any position along the fibre [ 2 ]. In the most direct implementation, a light pulse is launched into the long sensing fibre and is continuously back-reflected to the fibre input end through natural scattering processes, as in a radar system. Analysing this back-reflected light (spectrum and amplitude) reveals the quantity to be measured, and observing its time response translates into position-resolved information, given the finite speed of light and the specific time light requires to return from a given point along the fibre.
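The radar analogy above can be made concrete: with the group index of the fibre, the arrival time of each backscattered sample maps directly to a position along the fibre. A minimal sketch (the group index is an assumed typical value for silica fibre, not a figure from this roadmap):

```python
C = 299_792_458.0   # vacuum speed of light, m/s
N_GROUP = 1.468     # assumed group index of a standard silica fibre

def scatter_position(round_trip_time_s: float) -> float:
    """Position along the fibre (m) that produced backscatter received
    round_trip_time_s after pulse launch; light travels there and back."""
    return C * round_trip_time_s / (2.0 * N_GROUP)

# A sample arriving 1 ms after launch originates roughly 100 km down the fibre.
z_1ms = scatter_position(1e-3)
```

The factor of two in the denominator accounts for the round trip, which is why sampling the detector at a given rate fixes the position grid of the measurement.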
In this way such a sensor can substitute for thousands of point sensors, the sensing element having the two functions of converting the measured quantity into a modulation of the signal and of transmitting the signal, before and after modulation, to the processing unit. The optical fibre is evidently an excellent candidate for such a sensing element, the main difficulty being to identify the right phenomenon, activated by the measured quantity, that will impose the proper modulation on the signal. For this purpose, the three natural scattering processes observed in glass are exploited: historically, the first distributed fibre sensor was realised in the late 1980s using Raman scattering, whose scattering cross-section depends significantly on temperature. Such Raman distributed temperature sensors are still deployed to survey the temperature profile of large structures, for example to find hot spots along energy cables and to detect fire in tunnels.
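The temperature dependence of Raman scattering mentioned above is usually exploited through the anti-Stokes/Stokes power ratio. The sketch below shows the ideal relation and its inversion; the Raman shift and Stokes/anti-Stokes wavelengths are assumed illustrative values for a 1550 nm pump in silica, and a real instrument must additionally calibrate out the differential fibre attenuation between the two bands:

```python
import math

H, KB = 6.62607015e-34, 1.380649e-23  # Planck (J*s) and Boltzmann (J/K)
DNU_HZ = 13.2e12                      # assumed Raman shift of silica, ~13.2 THz

def antistokes_stokes_ratio(T_kelvin, lam_s=1663e-9, lam_as=1450e-9):
    """Ideal anti-Stokes/Stokes power ratio of spontaneous Raman backscatter
    (wavelengths are assumed values for a 1550 nm pump in silica)."""
    return (lam_s / lam_as) ** 4 * math.exp(-H * DNU_HZ / (KB * T_kelvin))

def temperature_from_ratio(ratio, lam_s=1663e-9, lam_as=1450e-9):
    """Invert the ratio back to temperature (K): the DTS measurement."""
    return H * DNU_HZ / (KB * math.log((lam_s / lam_as) ** 4 / ratio))

T_round_trip = temperature_from_ratio(antistokes_stokes_ratio(300.0))
```

Because only the anti-Stokes band carries the exponential temperature dependence, taking the ratio of the two bands cancels common intensity fluctuations, which is the design choice that makes Raman DTS robust.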
A few years later, Brillouin scattering was proposed for distributed sensing, offering a spectral signature dependent on temperature and strain. This type of sensor is more sensitive and shows an extended distance range and a better spatial resolution. Hence, it is widely employed in the energy industry, infrastructure and environmental monitoring, homeland security, etc.
More recently, Rayleigh scattering has been proposed as a more advanced technique, in which the information is obtained by comparing the shape of backscattered traces induced by the random interference of coherent light scattered at inhomogeneity centres inside the fibre core. This results in an interferometric sensitivity that enhances the response by some three orders of magnitude. Such systems are successfully used to sensitively detect vibrations, with applications to intrusion detection and seismic monitoring.
The performance of such sensors is globally measured by the number of resolved points—the distance range divided by the spatial resolution—which is in turn scaled by the signal-to-noise ratio (SNR) of the detected signal. Considering the intrinsic properties of optical fibres, 100 000 points can be resolved in state-of-the-art systems, over a maximum distance range of 100 km limited by natural optical losses. Research efforts manage to improve these figures and to speed up the acquisition time which still takes several seconds.
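The figure of merit defined above can be illustrated numerically: in a pulse-based system the spatial resolution is set by the probe pulse width, and dividing the distance range by it reproduces the ~100 000 resolved points quoted for state-of-the-art systems. Pulse width and group index below are assumed illustrative values:

```python
C = 299_792_458.0   # vacuum speed of light, m/s
N_GROUP = 1.468     # assumed group index of a standard silica fibre

def spatial_resolution_m(pulse_width_s: float) -> float:
    """Two-point spatial resolution set by the probe pulse width."""
    return C * pulse_width_s / (2.0 * N_GROUP)

def resolved_points(range_m: float, pulse_width_s: float) -> float:
    """Figure of merit from the text: distance range / spatial resolution."""
    return range_m / spatial_resolution_m(pulse_width_s)

# A 10 ns pulse resolves ~1 m; over 100 km that gives ~1e5 resolved points.
n_points = resolved_points(100e3, 10e-9)
```

The trade-off is also visible here: shortening the pulse improves resolution but reduces the backscattered energy per sample, which is why the SNR ultimately scales the achievable number of points.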
Multimode and multicore fibres have gained increasing attention for telecom applications, and it is reasonable to expect further impact in sensing [ 3 ]. Increasing computing power will also facilitate the interpretation of the backscattered signal through fast post-processing of encoded incident signals [ 4 ] and further use of artificial intelligence (AI). The range extension will continue beyond the current 170 km [ 5 ], allowing for further deployment in geophysics [ 6 ], where range capabilities and high resolution provide a competitive edge with respect to other types of sensors.
Quasi distributed sensors
FBGs represent the vast majority of the quasi-distributed OFS market. FBGs have consistently attracted strong interest because of their long lifetime, high accuracy, compact size, fast response time and, above all, multiplexing capability; they have found significant applications in the oil and gas market, in structural health monitoring for aerospace and civil engineering, and in the power industry. The global FBG sensor market size is estimated at USD 0.4 B in 2022, with a forecast CAGR of 7.4% up to 2028. This will benefit from the larger market for FBG filters in telecoms, which is expected to grow at a CAGR greater than 20% over the same period.
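The temperature and strain readout behind the FBG applications listed above rests on the Bragg condition: the grating reflects at a wavelength set by the effective index and grating period, and that wavelength shifts with strain and temperature. The coefficients in this sketch are assumed textbook values for silica fibre near 1550 nm (roughly 1.2 pm per microstrain and 10 pm per kelvin), not figures from this roadmap:

```python
def bragg_wavelength_nm(n_eff: float, period_nm: float) -> float:
    """Bragg condition: the grating reflects at lambda_B = 2 * n_eff * period."""
    return 2.0 * n_eff * period_nm

def bragg_shift_pm(lam_b_nm, strain=0.0, dT=0.0, p_e=0.22, k_t=6.7e-6):
    """Wavelength shift (pm) for applied strain (dimensionless) and dT (K);
    p_e (photo-elastic) and k_t (thermo-optic + expansion, per K) are
    assumed textbook silica values."""
    return lam_b_nm * 1e3 * ((1.0 - p_e) * strain + k_t * dT)

lam_b = bragg_wavelength_nm(1.447, 535.6)          # a ~1550 nm grating
per_ustrain = bragg_shift_pm(lam_b, strain=1e-6)   # shift per microstrain
per_kelvin = bragg_shift_pm(lam_b, dT=1.0)         # shift per kelvin
```

Because each grating can be written with a different period, many sensing points can share one fibre and be read out in parallel by wavelength, which is the basis of the multiplexing advantage.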
There are two major challenges that FBGs will continue to face in the next decade: the relatively high cost of each single grating, which makes the whole system uncompetitive for applications that replace transducers frequently, and the cost of the detection system, significantly higher than that of competing electronic sensors. Draw tower gratings (DTGs), used in conjunction with OTDR, have emerged as an alternative to FBG wavelength multiplexing, providing a cheaper option when disposable transducers are required.
Laser-inscribed low-loss backscatterers [ 7 ] might become a competitive technology to DTGs in the next decade: low-loss backscattering elements are easier and cheaper to make than FBGs or DTGs. Like DTGs, low-loss backscattering elements do not exhibit frequency encoding, so they need other measurement techniques, such as time of flight (OTDR). The use of low-loss enhanced back-reflection fibres will likely continue to benefit distributed sensing as well, because of the significant improvement they provide in the signal-to-noise ratio [ 8 ].
The large cost of current detection units is due to the multiple elements present in the sensing system, which relies on the diffraction of light to spatially spread the various frequency components and selectively monitor small wavelength ranges for detection. Cheaper spectrometers have been proposed using speckle patterns [ 9 ], and further development into a temporally stable configuration might provide a future solution for an overall cheaper interrogation unit.
Integration will continue to play a significant role and will be key to decreasing the cost of both sources and detection systems.
Point sensors
Point sensors include a large variety of sensors, including vibration, temperature, chemical, biological, acceleration and rotation, just to cite a few.
Amongst point sensors, the fibre optic gyroscope (FOG) is arguably the biggest success, with strong deployment in aerospace and defence. Estimates of the current global FOG market size differ widely, but mostly fall within the range USD 0.8–1.4 B, with a CAGR of 3.6%–8.1% over the next decade. This is likely to remain the most successful point sensor, because of increasingly wide deployment in drones and UAVs and of the better gyro performance at small rotation rates, which allows for an increased number of underwater applications. Research into better performance in the angular random walk (ARW) and bias instability regions will continue to involve longer stretches of fibre, better fibre winding configurations and higher-performance lasers. Yet disruption will likely come from the use of novel fibres, such as multicore fibres or hollow core fibres: nested anti-resonant hollow core fibres (ARHCFs) have exhibited extremely small backscattering, which can be 45 dB smaller than in conventional telecom fibres [ 10 ], thus allowing a significant decrease in the ARW.
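The benefit of the longer fibre stretches mentioned above follows from the Sagnac effect, whose phase shift grows with the product of coil length and diameter. A sketch with assumed example coil parameters (the coil geometry and the use of Earth's rotation rate as the test signal are illustrative choices, not taken from this roadmap):

```python
import math

C = 2.99792458e8  # vacuum speed of light, m/s

def sagnac_phase_rad(L_m, D_m, omega_rad_s, lam_m=1550e-9):
    """Sagnac phase of a fibre coil: dphi = 2*pi*L*D*Omega / (lambda*c),
    with L the total fibre length and D the coil diameter."""
    return 2.0 * math.pi * L_m * D_m * omega_rad_s / (lam_m * C)

# Assumed example coil: 1 km of fibre on an 8 cm spool sensing Earth's
# rotation rate (~7.292e-5 rad/s) yields a phase of tens of microradians.
phi_earth = sagnac_phase_rad(1000.0, 0.08, 7.292e-5)
```

Doubling the fibre length doubles the phase for the same rotation rate, which is why winding more fibre onto the coil directly improves sensitivity, at the price of extra loss and backscatter.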
For the wider group of point sensors, the next decade should see more sensors deployed in spectral regions now considered inaccessible because of current fibres' poor transmission: chemical sensors will benefit from the extended operation in the mid-IR wavelength region provided by hollow core fibres. The hollow fibre core and the small overlap (often smaller than 10 −4 ) of the propagating mode with the fibre's glass structure allow the fibre to be used as a gas cell with minimal gas volume and optimal overlap between gas and optical mode, thus minimising the amount of gas needed for testing.
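The gas-cell argument above can be quantified with a simple Beer-Lambert model in which a modal overlap factor scales the light-gas interaction: near-unity overlap with the gas in a hollow core, versus the tiny evanescent overlap of a solid-core scheme. All numbers below (absorption coefficient, concentration, length, overlap values) are assumed illustrative values:

```python
import math

def transmitted_fraction(alpha_per_m, conc_fraction, length_m, overlap):
    """Beer-Lambert transmission through a gas-sampling fibre:
    alpha_per_m is the absorption coefficient of the pure gas at the laser
    line; overlap is the fraction of modal power that samples the gas."""
    return math.exp(-alpha_per_m * conc_fraction * length_m * overlap)

# Assumed illustrative numbers: alpha = 1.0 /m for the pure gas, 100 ppm
# concentration, 10 m of fibre. A hollow core gives near-unity overlap;
# an evanescent-field solid-core scheme might give only ~1e-4.
T_hollow = transmitted_fraction(1.0, 100e-6, 10.0, overlap=1.0)
T_evanescent = transmitted_fraction(1.0, 100e-6, 10.0, overlap=1e-4)
```

With these assumptions the hollow-core absorbance is four orders of magnitude larger for the same gas and fibre length, which is the quantitative content of the "optimal overlap, minimal gas volume" claim.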
Extrinsic sensors
As in this class of OFSs the optical fibre only has the task of carrying light between source, transducer and detection system, most extrinsic OFSs are frequently not included in assessments of the OFS market. Yet optical fibre endoscopes are often considered an exception, as they represent the first OFS [ 11 ], created well before low-loss optical fibres were developed. The global endoscopy devices and equipment market in 2022 is estimated at USD 7.9 B, with expected growth to USD 10.6 B by 2026 at a CAGR of 7.7%. The next decade will continue to see significant growth in this market: research in the long-wavelength region will increase, as mid-IR cameras with large numbers of pixels become increasingly cheap and multicore fibres/fibre bundles with low attenuation at long wavelengths are developed.
Concluding remarks
Disruptive developments in the OFS field are likely to come from a wide range of directions (figure 1 ), mostly related to novel fibres. Hollow core fibres, with low attenuation, an extended transparency window, small Rayleigh backscattering and minute modal overlap with the fibre glass structure, will promote new developments in gyroscopes and in sensors for harsh environments and nuclear fusion. Multicore fibres will provide additional referencing and might yield advantageous performance in shape sensing and gyros. Enhanced backscattering fibres will extend the sensing range beyond 200 km from a single end, increasing deployment in marine environments and earth science.
The thrust for cheaper optical sources and detectors will continuously decrease the cost of OFSs, making them competitive for a wider range of applications now dominated by other forms of sensors. Cost competitiveness could open the home as a new field for fibre sensors, where the broad deployment of optical fibres in residential settings might allow their prompt use in sensing.
Acknowledgments
The authors acknowledge funding from EPSRC (Grant EP/S013776/1), The Royal Society (London) (CHL\R1\180350), and NERC (NE/S012877/1).
Specialty fibres for sensing applications
Xian Feng
Jiangsu Normal University, People’s Republic of China
Status
Historically, specialty fibres have followed a spiral of rising evolution: from simple core/cladding structures and simple materials [ 12 ] (e.g. silica and other non-silica glasses, which were well developed before the burst of the telecom bubble), to the introduction of wavelength-scale microstructured features [ 13 ], and most recently to the integration of multiple materials, structures and functionalities into a fiberized platform [ 14 ].
An optical fibre sensor detects physical or chemical information from the surrounding environment. The fibre output signals carry this information in the spatial, temporal, frequency, polarization, or phase domain, owing to interactions between the optical modes propagating along the fibre core and the external fields. Three fundamental components are necessary to fulfil the desired sensing function: the light source, the fibre medium (i.e. the sensing element), and the detector (i.e. the translator). Specialty fibres are advantageous over conventional optical fibres because their materials and structures are tailored to enhance the interaction between the optical modes within the fibre and the external fields [ 14 ].
Owing to their compactness and flexibility, specialty fibres are widely used in many advanced sensing areas (see the schematic in figure 2 ), including personal health, energy, the environment, emerging-pathogen detection and characterization, autonomous systems and robotics, and hyper-accurate positioning, navigation, and timing (PNT). Most of these areas fall under the category of critical and emerging technologies.
Current and future challenges
The challenges to specialty fibre sensors come mainly from practical demands, for which many competing solutions exist. One of the most competitive technologies is electronic chip sensors, which originate from the semiconductor industry and directly generate electrical signals from external stimuli. Competition with such technologies and the perform-or-perish trend put high pressure on the development of specialty fibre sensing technology.
A comprehensive coverage of the current and future challenges for specialty fibre sensors is not possible here, given the author's limited expertise. Nevertheless, this section addresses the issue by highlighting some recent hot topics: (i) Wearable products are widely available commercially for monitoring personal daily activities and health. With the assistance of many embedded sensors, such products can monitor heart rhythms, blood oxygen saturation, blood pressure, and so on. Since these components have limited contact with the surface of the human body, only limited physical information about the body can be retrieved. (ii) In energy applications, lithium batteries, with their high charge density, are widely used in smartphones and electric cars. However, fire and explosion can occur when Li batteries start degrading after a certain period of use. (iii) As the largest pandemic of this century, the disease caused by the SARS-CoV-2 coronavirus (COVID-19) has led to over 0.5 billion confirmed cases and over 6 million deaths. Owing to the rapid spread of the disease and the large number of infections and deaths, a fast, accurate and economical testing approach is crucial for mass surveillance of SARS-CoV-2 infections. The traditional nucleic acid test still requires a long waiting time for results. (iv) Cutting-edge robotics and autonomous systems require sensors for real-time position, shape and posture tracking. (v) PNT is commonly provided by global positioning system (GPS) constellations. However, GPS becomes problematic, for example, when the user is inside an underground tunnel or a submarine, or for rapidly developing autonomous underwater vehicles. Rising geopolitical conflicts raise the real concern that such satellite-based systems could be severely interfered with or damaged, rendering them useless. A self-sufficient navigation system that can aid both civilian and military users in case GPS catastrophically fails is therefore needed.
Advances in science and technology to meet challenges
(i) Distributed smart clothes made of multiplexed optical fibres have been developed as multimodal wearable sensors for on-site detection of multiple physical or chemical parameters over large areas of the human body [ 15 ]. (ii) Either hollow-core fibre sensors or evanescent-field tapered fibre sensors can be embedded inside a Li battery for in-situ spectral analysis. Hollow-core optical fibre sensors have been demonstrated for operando Raman spectroscopic investigation of Li-ion battery liquid electrolytes. The hollow-core fibre functions both as a microfluidic channel that samples microlitre volumes of electrolyte liquid and as a waveguide that delivers the excitation laser and retrieves the Raman signal [ 16 ]. (iii) Specialty-fibre-based virus sensors require an enhancement mechanism because of the sub-nanometre size of viruses and the weakness of the signals. A fibre-based surface plasmon resonance (SPR) sensor can enhance the weak signals generated by the interaction between the excitation laser and the viruses by a few orders of magnitude. For such fibre sensors, D-shaped fibres or hollow-core fibres are normally used, with nanostructured metal features deposited either on the outer surface of the D-shaped fibre or on the inner core surface of the hollow-core fibre. A relatively fast response time of ∼10 min can be obtained to identify a positive virus carrier, compared with the typical 3–4 h required by the traditional method [ 17 ]. (iv) FBG arrays inscribed in multicore optical fibres have proven to be a powerful tool for sensing 2D and 3D curvature and shape with high resolution and accuracy, enabling cutting-edge applications such as smart robotic surgery [ 18 , 19 ]. (v) Geomagnetic navigation technology is a promising alternative to GPS navigation, because the earth's magnetic field is an inherent feature of the earth and can be mapped and tracked [ 20 ].
Existing magnetic sensors have many shortcomings, including low sensitivity, large volume and high power consumption, which do not meet the requirements of long-term underwater operation. One effective technical solution could be fibre magnetic-field sensors that exploit the magneto-refractive properties of rare-earth-doped glasses [ 21 ]. With a proper selection of paramagnetic rare-earth dopants and optimized fabrication control to achieve low-loss fibre, highly sensitive magnetic-field detection at the fT level could be realized using a hundred-metre-long highly birefringent fibre.
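The benefit of a long sensing fibre can be sketched with the simplest magneto-optic model, Faraday rotation theta = V * B * L: the smallest detectable field falls linearly with fibre length. The Verdet constant and minimum resolvable rotation below are assumed values for illustration, not figures from [ 21 ]:

```python
# Illustrative scaling for a fibre magnetometer based on Faraday
# rotation theta = V * B * L. The Verdet constant (of the order
# reported for rare-earth-doped silica fibre) and the minimum
# resolvable rotation are assumed values, not figures from the text.
V = 30.0          # Verdet constant, rad/(T*m) (assumed)
theta_min = 1e-8  # minimum detectable polarization rotation, rad (assumed)

def b_min(length_m: float) -> float:
    """Smallest detectable field for a given fibre length."""
    return theta_min / (V * length_m)

# Sensitivity improves linearly with fibre length:
for L in (1.0, 100.0):
    print(f"L = {L:6.1f} m -> B_min = {b_min(L):.2e} T")
```

The sketch only shows the length scaling; reaching fT-level fields additionally requires the low-loss, highly birefringent fibre and dopant optimization discussed above.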
Concluding remarks
The rapid change of modern society poses great challenges but also offers great opportunities for the development of specialty fibre sensors. The ultimate strategy for future specialty fibre sensor technology is to balance the combination of materials and structures to achieve the desired sensing functionalities [ 14 ].
Acknowledgments
This work is supported by the National Natural Science Foundation of China (NSFC, 62175096), Jiangsu innovation and entrepreneurship Team, Priority Academic Program Development of Jiangsu Higher Education Institutions, and Jiangsu Collaborative Innovation Centre of Advanced Laser Technology and Emerging Industry.
Micro- and nano-engineered sensors
Lei Zhang
Zhejiang University, People’s Republic of China
Status
Back in 1966, Kao and Hockham proposed low-loss optical fibres, which quickly found extensive applications in optical communication and sensing. To date, distributed fibre-optic sensors, photonic crystal fibre sensors, and chalcogenide glass fibre sensors have been extensively studied and have found various applications. With rapid progress in nanotechnology and flexible optoelectronics, there is an increasing demand for high-performance sensors with faster response, smaller footprint, higher sensitivity, and lower power consumption, in order to explore the detection limits of force or of the interactions between molecules and thereby understand the fundamentals of physics, biology, and medical science. This demand has spurred great efforts in micro- and nano-engineered optical sensors. Since the probing light wavelength is close to or below the dimensions of the micro- and nano-engineered structures, these sensors offer more flexibility in tailoring light to sense weaker light–matter interactions.
Since the first demonstration of low-loss optical waveguiding in subwavelength-diameter silica micro/nanofibres (MNFs) in 2003, MNFs have attracted considerable attention owing to their engineerable strong evanescent fields and excellent mechanical properties, which make them ideal building blocks for micro/nanoscale waveguiding sensors. To assemble an MNF-based sensor, the MNF must be well packaged to avoid environmental disturbance or surface contamination. Benefiting from networks of microchannels, an optofluidic system can protect the MNF from unintended stimulation, supply small sample volumes, and renew the MNF surface, making the MNF suitable for detecting ultratrace molecules in solution [ 22 ]. On the other hand, the embedded MNF is a multifunctional detector for real-time monitoring of microflow status, which is important for the feedback control of an optofluidic system [ 23 ].
In addition to waveguiding structures, resonant structures can significantly enhance light–matter interactions, making them ideal candidates for highly sensitive sensors. For example, optical whispering-gallery mode (WGM) microresonators (e.g. microspheres, microdisks, and microtoroids), which confine resonant photons in a microscale volume, have been used for the detection of materials in different phases and forms, including gases, liquids, and chemicals [ 24 ]. In contrast to WGM resonators, metal nanostructures (e.g. noble metal nanoparticles) provide a mode size much smaller than the vacuum wavelength of light and comparable to the cross-section of biomolecules, making them favourable for single-molecule or single-particle sensing. Overall, both waveguiding and resonant sensors hold great potential for next-generation sensing applications.
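The sensitivity of such resonant structures can be illustrated with the first-order WGM resonance condition m * lam = 2*pi*R*n_eff, which gives d_lam/lam ≈ dR/R + dn/n. The sketch below, with illustrative numbers (not taken from the text), shows how a tiny index perturbation maps to a picometre-scale resonance shift:

```python
# First-order estimate of a WGM resonance shift: from the resonance
# condition m * lam = 2*pi*R*n_eff it follows that
#   d_lam / lam ≈ dR / R + dn / n.
# All numbers below are illustrative, not taken from the text.
lam = 1.55e-6   # resonance wavelength, m
n_eff = 1.45    # effective refractive index

dn = 1e-6       # index perturbation from, e.g., an adsorbed analyte layer
d_lam = lam * (dn / n_eff)          # shift from the index change alone
print(f"resonance shift: {d_lam * 1e12:.2f} pm")  # ~1 pm
```

A picometre-scale shift is readily resolved only because the high Q-factor makes the resonance linewidth comparably narrow, which is why Q is the figure of merit emphasised above.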
Current and future challenges
In the past few years we have witnessed great success in micro- and nano-engineered optical sensors; however, challenges remain in fabrication, practical applications, and sensing-mechanism innovation. Firstly, on the fabrication side, as feature sizes go down to the subwavelength scale, high-precision, cost-effective and scalable fabrication techniques are a key issue for sensing performance and practical applicability. For example, how to draw MNFs with controlled diameter, functionalize MNFs with high repeatability, and automatically package MNFs with high robustness remains challenging. For high-quality WGM resonators, delicate fabrication processes, expensive instruments, and complicated coupling procedures limit their use in both scientific research and practical applications. Noble-metal nanocrystals represent an important class of materials for localized SPR (LSPR) and surface-enhanced Raman spectroscopy (SERS) based sensing. To move from academic studies to practical applications, the small-batch synthesis of such nanocrystals must be scaled up.
Secondly, on the application side, there are two typical areas: scientific research and practical applications. For example, when the detection limit for microforces reaches the fN level or below, the sensor can be used to measure critical Casimir forces, optical scattering forces, and optical momentum. With rapid developments in health care, energy, robotics and AI, there is an increasing demand for novel sensors to meet the needs of these areas. For example, electronic skin (E-skin) can simultaneously differentiate among various physical stimuli from a complex external environment; however, its ultimate performance is fundamentally limited by the nature of low-frequency AC currents.
Thirdly, to meet the challenges of the above cutting-edge applications, new sensing structures and mechanisms are highly desired. For example, current leakage due to insufficient insulation and high sensitivity to electromagnetic disturbance remain challenges for E-skin sensors. An alternative to E-skin is the detection of pressure, strain, bending, and temperature by optical sensors, owing to their inherent electrical safety, immunity to electromagnetic interference, and small size. Note that multiparameter signals (e.g. pressure, strain, and temperature) often mix together; efficient decoupling of the output of a fibre-optic sensor must therefore be considered in real applications.
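The decoupling problem is commonly handled with the sensitivity-matrix method: two sensing elements with different strain and temperature responses yield a 2x2 linear system that can be inverted. The coefficients in the sketch below are hypothetical, chosen only to illustrate the principle:

```python
import numpy as np

# Sensitivity-matrix method for decoupling strain and temperature
# from the wavelength shifts of two gratings with different responses:
#   [d_lam1]   [K_e1  K_T1] [strain]
#   [d_lam2] = [K_e2  K_T2] [dT    ]
# The coefficients below are hypothetical, for illustration only.
K = np.array([[1.2, 10.0],    # sensor 1: pm/microstrain, pm/K
              [0.8, 14.0]])   # sensor 2: pm/microstrain, pm/K

true = np.array([50.0, 3.0])            # 50 microstrain, 3 K
shifts = K @ true                       # simulated measured shifts, pm
recovered = np.linalg.solve(K, shifts)  # decoupled strain and temperature
print(recovered)  # recovers [50, 3] up to floating-point rounding
```

The method works only when the matrix is well conditioned, i.e. the two elements respond to strain and temperature in sufficiently different ratios; this is the practical design constraint behind "efficient decoupling".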
Advances in science and technology to meet challenges
On the fabrication side, to control MNF diameter, the cutoffs of higher-order modes are monitored in real time during the fibre-pulling process. By accurately measuring the time interval between two abrupt transmission drops, a diameter precision better than 2 nm can be achieved with a transmission as high as 99.4% [ 25 ]. To address the challenges faced by inorganic microcavities, polymers such as poly(methyl methacrylate) (PMMA), epoxy resin, and SU-8 have received considerable attention for their potential in devices with advanced functionalities not attainable with inorganic materials [ 26 ]. To scale up the production of noble-metal nanocrystals, droplet-based continuous-flow synthesis has proved to be an effective platform for the large-scale synthesis of shape-controlled nanocrystals.
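The cutoff-monitoring idea rests on the normalized frequency V of an air-clad MNF: each higher-order mode vanishes at a known V value, so an observed cutoff pins down the instantaneous diameter. A minimal sketch with nominal index values (illustrative, not figures from [ 25 ]):

```python
import math

# Diameter at which an air-clad silica micro/nanofibre becomes
# single-mode at a given wavelength: the LP11 mode cuts off at
# V = 2.405, where V = (pi * d / lam) * sqrt(n_core**2 - n_clad**2).
# Watching such cutoffs during pulling reveals the instantaneous
# diameter. Index values are nominal, for illustration.
n_core, n_clad = 1.444, 1.0   # silica in air near 1550 nm
lam = 1.55e-6                 # probe wavelength, m
V_cutoff = 2.405              # LP11 cutoff (step-index result)

NA = math.sqrt(n_core**2 - n_clad**2)
d = V_cutoff * lam / (math.pi * NA)
print(f"single-mode below d ≈ {d * 1e6:.2f} um")  # ≈ 1.14 um
```

Because d depends only on known constants at the cutoff instant, the measured transmission drop converts directly into an absolute diameter reference during pulling.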
To meet the increasing demand for novel applications, a great number of optical fibre sensors have been reported recently. For example, to overcome the limitations faced by E-skins, an MNF was used to assemble an ultrasensitive optical skin sensor (figure 3 (a)), which can detect weak pressure with ultrahigh sensitivity (1870 kPa −1 ), a low detection limit (7 mPa) and fast response (10 μ s) [ 27 ]. To understand ion-transport kinetics and electrolyte-electrode interactions at the electrode surfaces of batteries in operation, an optical fibre plasmonic sensor capable of being inserted near the electrode surface of a working battery was demonstrated (figure 3 (b)) [ 28 ].
Furthermore, new sensing mechanisms or structures could be introduced into optical sensors with micro- or nanoengineered structures. For example, optical fibre tip devices have miniature sizes, diverse integrated functions, and low insertion losses, making them suitable for high-sensitivity nanoforce measurements (figure 3 (c)) [ 29 ], in situ early monitoring of cellular apoptosis (figure 3 (d)) [ 30 ], cancer sensing and therapy [ 31 ]. Although tremendous efforts have been made in developing novel nanomaterials/nanostructures for high performance sensors and environmental remediation [ 32 , 33 ], nanosafety is paramount considering the risks associated with manufactured nanomaterials [ 34 , 35 ].
Concluding remarks
The development of micro- and nano-engineered structures enables optical sensors with performance, in terms of footprint, sensitivity, and response time, that is not possible with conventional optical sensors. Important progress has been made, and further advances are expected in ultrasensitive optical force sensors, wearable sensors, and optofluidic-chip-based sensors. The practical realization of micro- and nano-engineered sensors requires advances in fabrication and integration techniques, a better understanding of multidisciplinary science, and the exploitation of new physical effects.
Acknowledgments
This research was supported by National Natural Science Foundation of China (No. 61975173), Major Scientific Research Project of Zhejiang Lab (No. 2019MC0AD01), and Key Research and Development Project of Zhejiang Province (No. 2021C05003).
Whispering gallery mode sensors: towards spatially resolved and spatially independent detection
Misha Sumetsky
Aston Institute of Photonic Technologies, Aston University, Birmingham, United Kingdom
Status
The emerging field of optical microresonators includes research and development of individual and coupled planar and essentially three-dimensional microresonator devices. The general functional scheme of a microresonator device is shown in figure 4 (a). The performance of these devices is usually characterised by the spectrum of the output resonant light. In contrast to optical signal processing and spectroscopic applications, for sensing applications the microresonators are designed (e.g. specially shaped and coated) so that their optical parameters are sensitive to variations of selected material characteristics within their volume and in the closely adjacent medium [ 36 ]. Owing to the large Q-factor of a broad range of optical microresonators, their resonant spectra can be very sensitive to these variations.
Whispering gallery mode (WGM) microresonators, with the characteristic shapes of a sphere (figure 4 (b)), toroid (figure 4 (c)), and bottle (figure 4 (d)), represent a key class of microresonator sensors [ 36 ]. Their importance stems from a relatively large surface area open to the environment, which can be probed by evanescent WGM tails, and from exceptionally large Q-factors. For practical applications, these microresonators can be placed on a chip and coupled to robust input-output optical waveguides rather than the microfiber tapers shown in figure 4 [ 37 ]. WGM optical sensors can be divided into those detecting changes at the microresonator surface (e.g. the appearance and varying properties of micro/nanoparticles (NPs) with dimensions down to single atoms and molecules) and those detecting changes of the bulk microresonator material parameters (e.g. temperature and stress). Of special importance are the elongated and shallow bottle microresonators, also called SNAP microresonators (SMRs), shown in figure 4 (d) [ 38 ]. A particular advantage of SMRs is that they can detect changes at their external surface as well as (similar to bubble microresonators [ 36 , 38 ]) at the internal surface if fabricated from thin-wall microcapillaries (figures 4 (e) and (f)). In the latter case, these devices act as microfluidic sensors.
The goal of this roadmap is to discuss several situations when application of WGM microresonators can be beneficial compared to other sensing methods with the examples based on applications of SMRs.
Current and future challenges
Most microresonator sensing methods developed to date detect environmental changes with a spatial precision that does not go beyond the characteristic microresonator dimensions. For example, it is commonly assumed that a resonance shift caused by a microparticle indicates its appearance or displacement rather than its actual coordinates on the microresonator surface [ 36 ]. However, in certain cases a more detailed spectral analysis yields information about the microparticle location. For example, the absence of a shift in one spectral resonance, as opposed to finite shifts of the others, may indicate that the microparticle sits at a node of the corresponding eigenstate.
Is it possible to develop a comprehensive approach that can give us the detailed information about the spatial distribution of changes happening within the microresonator volume and adjacent medium? Generally, the information contained in the resonant spectra measured as a function of time at the fixed input–output waveguide position, or even continuously as a function of waveguide coordinate and time, is not sufficient to restore the spatial distribution of the microresonator refractive index and its shape. Nevertheless, looking for the situations when this problem can be solved is of great interest.
Alternatively, we may be interested in detecting physical and chemical processes happening with micro/NPs and molecules rather than their positions. In this case, variation of the resonant spectrum caused by these processes, which takes place at the background of the displacement of individual particles, should be extracted. Solution of this problem is more challenging than just detecting the microparticle positions noted above. Nevertheless, we show below that this problem can be addressed easier for specially designed shapes of SMRs.
Advances in science and technology to meet challenges
Here we present potentially feasible approaches to address the problems of spatially resolved and spatially independent sensing based on the recent progress in the development of the SMR theory [ 39 , 40 ] and new methods of SMR fabrication [ 41 – 43 ].
We start with the simplest problem: sensing the evolution of a heated droplet in a silica microcapillary, illustrated in figure 4 (e). It was shown in [ 41 ] that a water droplet induces an SMR inside a silica microcapillary whose axial width equals that of the droplet. In particular, the reduction of the droplet width due to evaporation was detected with nanometre accuracy. It follows from the results of [ 41 ] that the displacement of the droplet edges along the microcapillary can be recovered with the same nanometre precision by monitoring the WGM transmission spectrum at a fixed position of the input-output waveguide.
A more general approach to detect the microfluidic components in an optical microcapillary suggested in [ 39 ] is illustrated in figure 5 (a). In this case, sensing of micro/NPs floating in liquid can be accomplished with a specially designed SMR. The spectrogram of such SMR recently fabricated in [ 42 ], which is extracted from figure 5 (d2) of the latter paper, is shown in figure 5 (b). When a microparticle enters and moves inside the volume of SMR close to its internal surface (the latter proximity can be ensured by employing a 1D microfluidic channel illustrated in figure 4 (f)), we sequentially observe shifts of eigenwavelengths of wider eigenstates followed by shifts of eigenwavelengths of narrower eigenstates. Observation of the WGM transmission spectrum of the SMR with an input-output microfiber positioned in its centre potentially allows us to determine the axial displacements of a single and several micro/NPs. The proof-of-concept experimental demonstration of such nonlocal microfluidic sensing has been recently presented in [ 44 ]. Similarly, the knowledge of the WGM transmission spectrum allows us to determine the SMR profile, spatial temperature distribution (rather than the average temperature [ 36 ]) and distribution of irreversible temperature-induced material changes [ 42 ] by solving the inverse problem [ 39 ]. It was suggested in [ 39 ] that WGMs appropriately populated with light can be used as microscopic optical tweezers manipulating microparticles.
Alternatively, a feasible solution to separate the effect of microparticle displacement on the observed SMR spectrum from more complex physical and chemical processes happening to one or several microparticles was recently proposed in [ 40 ], where SMRs having an eigenstate with uniform field amplitude along an extended part of the SMR surface were designed. These SMRs, illustrated in figure 5 (c), are called bat microresonators, since the profile of the original design [ 40 ] resembles that of a bat. A bat microresonator with the spectrogram shown in figure 5 (d) was experimentally demonstrated in [ 43 ]. Displacement of a microparticle along the surface area with a uniform eigenstate amplitude can be separated out, since it does not perturb the corresponding eigenwavelength (in contrast to the other eigenwavelengths of the SMR) unless the microparticle changes its structure or orientation. We suggest that bat SMRs can find important applications in sensing as well as in quantum technologies, where positioning of the maximum possible number of quantum emitters in a resonantly enhanced and spatially uniform region of light is required [ 45 ].
Concluding remarks
It is challenging and often impossible to reveal the details of processes taking place at the optical microresonator from its transmission spectra. However, in special cases, characteristics of sensing objects can be extracted from the resonant spectra with exceptional precision. In this roadmap, we discussed prospects of sensing microfluidic components, such as micro/NPs, and the feasibility of independent detection of their displacement, as well as detecting the spatial distribution of temporal and irreversibly induced material changes along the specially designed SMRs.
Acknowledgments
The author acknowledges support from the Engineering and Physical Sciences Research Council (Grants EP/P006183/1 and EP/W002868/1), Wolfson Foundation (Grant 22069), and Leverhulme Trust (Grant RPG-2022-014).
Single-molecule whispering-gallery mode sensing at quantum limits for investigating photocatalytic reactions and key processes in quantum biology
Callum Jones, Srikanth Pedireddy and Frank Vollmer
Department of Physics and Astronomy, Living Systems Institute, University of Exeter, United Kingdom
Status
WGM sensors (figure 6 (A)) utilise the exceptional quality factor of dielectric microcavities, such as glass microspheres, and the strong localization of electromagnetic fields by metal NPs such as gold nanorods. The new class of optoplasmonic WGM sensors provides very high detection sensitivity in biosensing. They have enabled the detection of the smallest chemical species in solution, namely single atomic ions [ 46 ]. More recently, they have been used to study the dynamics of biomolecular reactions catalysed by enzymes such as the maltose-inducible α-glucosidase (MalL), revealing the MalL conformational-state transitions and a negative activation heat capacity [ 47 ]. WGM optoplasmonic sensors have also been used to study reactions of small molecules on the gold NP surface [ 47 , 48 ], such as reversible disulfide reactions and the interactions of oligonucleotides and agrochemicals. These demonstrations have promising healthcare and environmental sensing applications, especially for WGM optoplasmonic sensing integrated with microfluidics on sensor chips [ 49 ].
The plasmonic nanorod is the key element of the optoplasmonic WGM sensor that provides single-molecule detection sensitivity. Essentially, optoplasmonic sensors can be considered single-molecule LSPR (localised SPR) sensors [ 50 , 51 ]. Plasmon-resonance-based sensing at the single-molecule level opens up exciting opportunities: investigating the effects of the localised near field on chemical reactions [ 52 ], improving detection limits with plasmonic nanostructures that have strong near-field enhancements, investigating ligand interactions and ligand exchange on plasmonic NPs [ 48 ], finding ways to selectively immobilise analyte molecules at plasmonic hotspots, and developing new sensing modalities that require strong scattering from the NPs [ 51 , 53 ] or plasmonic heating effects, such as thermo-optoplasmonic (TOP) sensing. In some cases interesting photochemistry can be unravelled, for example by observing DNA hybridization kinetics on gold nanorods [ 54 ].
It has been demonstrated that intense enhanced field confinement (hot spots) on NP surfaces can be achieved by introducing nanoscale features, such as spikes, to the NP surface. Hence there is a surge in research on the synthesis of metal NPs with sharp tips, corners, and edges (figure 6 (B)). Optoplasmonic sensors are currently optimised through the choice of plasmonic NP/structure [ 51 ], the fabrication of high-quality microcavities [ 53 ], microfluidic integration, which can improve sample delivery and reduce fluidic and thermal sources of noise [ 55 , 56 ], and by operating the devices at their fundamental detection limits. When laser shot noise dominates, the precision of optical measurements using classical light is limited by the quantum noise limit (QNL) [ 57 , 58 ]. To date, single-molecule detection operating at the QNL has been demonstrated using a tapered nanofiber sensor with a dark-field heterodyne measurement [ 59 ]. However, it is possible to surpass the QNL at a given optical power by using quantum-correlated light sources, for example entangled photon pairs and squeezed light. As an example of how quantum optics can be applied to measurements using WGM microresonators, Li et al [ 60 ] presented a magnetic field sensor with a 20% improvement in sensitivity obtained by using squeezed light.
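The relation between the QNL and a squeezing-enabled improvement such as that reported in [ 60 ] can be made concrete with shot-noise arithmetic: the phase uncertainty with N detected photons scales as 1/sqrt(N), and squeezed light reduces it by a factor exp(-r). The photon number below is illustrative, not a figure from the experiments cited:

```python
import math

# Shot-noise (quantum-noise-limited) phase sensitivity with N detected
# photons scales as 1/sqrt(N); squeezed light reduces the noise
# amplitude by exp(-r). Here we back out the squeezing level implied
# by a 20% sensitivity improvement. N is illustrative.
N = 1e12                        # detected photons (illustrative)
qnl = 1 / math.sqrt(N)          # quantum noise limit on phase, rad

improvement = 0.8               # 20% lower noise amplitude
r = -math.log(improvement)                       # squeezing parameter
squeezing_db = -10 * math.log10(improvement**2)  # variance reduction, dB
print(f"QNL: {qnl:.1e} rad, r = {r:.2f}, squeezing ≈ {squeezing_db:.1f} dB")
```

A 20% sensitivity gain thus corresponds to roughly 2 dB of usable squeezing at the detector, a reminder that losses between the squeezed source and detection rapidly erode the advantage.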
Current and future challenges
The coherent collective oscillation of free electrons in metal NPs may decay either radiatively, into light, or non-radiatively, producing high-energy electron-hole pairs termed hot carriers. These hot carriers may relax via electron-phonon coupling, locally heating the particle, or may reach the particle surface, transition into unoccupied levels of acceptor adsorbates and trigger chemical reactions. Investigating photochemistry on optoplasmonic WGM sensors represents one of the challenges, and potentially highly promising fields of study, in hot-carrier technologies, where hot carriers can catalyse chemical reactions by interacting with molecules at the particle surface (figure 6 (B)) [ 61 , 62 ]. Applications include the degradation of organic pollutants in wastewater, hydrogen generation by solar water splitting and the reduction of CO 2 , amongst others. The photocatalytic efficiency depends on various factors, such as the hot-carrier generation rate, the hot-carrier energy distribution, the rate of adsorption of molecules and the chemical stability of the photocatalyst.
One future challenge is combining single-molecule sensors with quantum technology which promises to be the next great leap in optical sensing technology. Quantum optical measurements and sensing with WGMs promise to make this leap happen. The future challenge is to explore the sensitivity limits of WGM single molecule sensors. If the quantum noise limited regime can be reached [ 59 ], then the sensitivity could potentially be improved even further by applying entangled photons and squeezed light, as demonstrated in [ 60 ]. Existing measurement schemes from quantum metrology will guide these efforts [ 57 , 58 ].
An improved signal to noise ratio in WGM signals would mean higher confidence in the detection of small molecules, but also could reveal more information present in transient interaction signals. Measurements exploiting quantum correlations are promising for developing highly non-invasive sensors by achieving better sensitivity than that given by the QNL, at low optical powers. These could find applications in the study of photosensitive samples highly prone to photodamage. These next-generation single-molecule sensors will deliver revolutionary advances in our ability to detect biomolecules, investigate, and exploit their complex quantum chemistry.
Advances in science and technology to meet challenges
Interesting chemical systems that lend themselves to WGM sensing are starting to emerge from studies on plasmonic NPs. Xu et al [ 63 ] investigated the mechanism of surface plasmon-assisted catalytic reactions converting pATP (p-aminothiophenol) to and from DMAB (p,p′-dimercaptoazobenzene) on a single Ag microsphere under an atmosphere containing O 2 and H 2 O vapour (figure 6 (B)). pATP was converted into DMAB by energy transfer (plasmonic heating) from the SPR to the surface-adsorbed pATP. Under this condition oxygen, acting as an electron acceptor, was essential for the conversion reaction, while H 2 O, acting as a deprotonation agent, accelerated it. On the other hand, the presence of H 2 O acting as a hydrogen source induced the hot-electron-promoted reverse reaction (conversion of DMAB to pATP). Furthermore, the photocatalytic oxidation reaction (conversion of pATP to DMAB) depends strongly on pH. The inclusion of secondary metals such as Pt or Pd in gold nanorods can greatly enhance their photocatalytic performance compared with the individual components [ 64 ]. For example, Majima et al demonstrated plasmon-enhanced catalytic formic acid dehydrogenation on Pd-tipped Au nanorods at lower temperatures.
To achieve the goals in quantum sensing, first, we must establish WGM sensors operating at the QNL by using techniques such as homodyne detection, and methods to limit or compensate for thermal noise. Then, by developing measurement schemes using entangled photon pairs or quadrature squeezed states with WGM resonators, the QNL could potentially be surpassed for single-molecule detection (figure 6 (C)). Entangled states such as N00N states can improve phase resolution in an interferometer beyond the QNL, while squeezed states of light may be used to reduce the phase or intensity noise of a measurement below the shot noise level, depending on the type of squeezed state generated [ 57 , 58 ].
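The two quantum limits invoked above follow textbook quantum-metrology scalings: a classical interferometric measurement with N photons has a phase uncertainty of 1/√N (the shot-noise, or standard quantum, limit), while N00N states can in principle reach the Heisenberg limit of 1/N. A minimal numerical sketch (the photon number is illustrative, not taken from the text):

```python
import math

def phase_uncertainty_sql(n: float) -> float:
    """Standard quantum (shot-noise) limit for interferometric phase: 1/sqrt(N)."""
    return 1.0 / math.sqrt(n)

def phase_uncertainty_heisenberg(n: float) -> float:
    """Heisenberg limit, in principle reachable with N00N states: 1/N."""
    return 1.0 / n

N = 1e6  # photons used in one measurement (illustrative)
print(phase_uncertainty_sql(N))         # 0.001 rad
print(phase_uncertainty_heisenberg(N))  # 1e-06 rad
```

For any fixed photon budget the Heisenberg-limited uncertainty is smaller by a factor √N, which is why entangled-state schemes are attractive at the low optical powers needed for photosensitive samples.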
Integrating components into on-chip devices could significantly reduce the complexity of the setups required in the long term. Indeed, using recent advances in integrated quantum optical devices, on-chip quantum optical biosensors could be an exciting future area of research [ 57 ].
This is the right time to develop quantum sensing on WGM sensors. The single-molecule sensing capabilities of optoplasmonic sensors will leapfrog with the application of quantum optics techniques and quantum technologies. Similar to the large-scale LIGO interferometer, where the application of quantum optics allowed the investigation of gravitational waves on the km-scale, applying quantum optics to micron-scale single-molecule sensors will allow the exploration of the quantum biology and chemistry of molecules on the nanometre scale. Quantum-optical WGM sensors will deliver unparalleled capabilities for sensing single molecules at the relevant length and timescales, surpassing classical limits of optical detection and unravelling new quantum phenomena. The sensors will offer insights into the quantum properties of living matter.
Concluding remarks
Although numerous researchers have contributed to the progress in hot electron‐induced chemical reactions, offering potentially high chemical reaction efficiency, this field is still in its infancy in terms of practical applications. WGM sensing of photocatalytic processes and at very high detection sensitivities (one electron turnover) may provide scientific breakthroughs in plasmonics and chemistry and open a new era of practical utilization of hot electrons in various chemical reaction systems.
Researchers will apply the quantum optical biomolecular WGM sensors to reveal the fundamental, quantum properties of key biomolecular systems and prepare the ground for their future exploitation. WGM sensors may well uncover and control the quantum properties of, for example, fast-acting enzymes, photochemical reactions that produce key metabolites, and magneto-sensitive neuroproteins. These breakthroughs will have numerous and varied applications, for example in ultrasensitive sensing for better health and environment, and in novel brain sensing and intervention methods.
Laser based sensors
Peter D Dragic
University of Illinois at Urbana-Champaign, United States of America
Status
It is often anecdotally stated that the laser appeared as a ‘solution seeking problems’ [ 65 ]. The naysayers promoting this viewpoint at that time were not witness to the technological struggles of the light-based sensing systems of the era. For example, the very first pulsed optical rangefinders were developed as early as the 1930s [ 66 ]. However, progress in this area stagnated partly due to a lack of appropriate light sources, namely those with sufficient brightness. Within just a few years following Maiman’s demonstration of the first ruby laser in 1960, the world was witness to the first uranium, HeNe, Nd:Glass, GaAs (semiconductor), YAG, Xe, Ar+, CO2, chemical, and dye lasers, among many others. In addition to continuous-wave (CW) lasers, Q-switching and mode-locking were also demonstrated within these golden years. This yielded a flood of new light sources that produced a wide range of colours, power levels, and beam characteristics. For those who already understood the ‘problems’ these lasers could address, the experience (or technological leaps) over those few years must have been much akin to handing someone a blow torch soon after they first learn how to start a campfire.
Moving into the modern era, lasers and optical remote sensing are inextricably coupled, and likely always will be. There is, therefore, no further need to justify the purpose of this section: to promote continued advancements in laser development for lightwave sensing applications. The previous Roadmap put forward a definition for laser-based sensors as those that ‘are distinguished from other optical sensors from the perspective that the measurement is entirely based upon the direct detection of laser light itself without relying on any external signal-transducing elements to the target object besides its ambient medium [ 14 ].’ Adopting this framework, it is the design and engineering of novel lasers, their capabilities and characteristics that will bring about improvements in system sensitivity, resolution, and accuracy. Laser-based sensing applications (figure 7 ) include ellipsometry, imaging, spectroscopy, vibrometry, interferometry, lidar, and quantum sensing [ 67 ]. Each comes with its own set of system and resulting laser requirements that must be carefully identified by the user. While laser-based sensing systems may be static or on a moving platform, on the ground or in space, use just a few photons or require high power, laser design is always at the forefront, yet never without considering the detector properties that will be needed for the task (e.g. responsivity curve, sensitivity, SNR, etc).
Current and future challenges
No existing laser can serve every known sensing application. The complexity in deciding whether a source is appropriate lies in the many attributes, both optical and mechanical (and otherwise), that require careful consideration [ 68 ]. These are identified and optimized more or less à la carte . Several of these are listed below in the context of laser-based sensing. As an illustrative example, the Laser Interferometer Space Antenna project led by ESA, as depicted in figure 8 , has strict requirements on laser coherence, phase noise, and opto-mechanical stability [ 69 ].
Laser wavelength . Typical spectroscopic systems, such as the laser detection of gases, require that the laser wavelength be tuned to or across an absorption feature and be stable to well below 1 pm over the measurement time. Often, this requires locking the laser wavelength to an external reference, such as a gas cell.
Laser spectrum . Coherent and spectroscopic systems require single frequency operation with sufficiently narrow linewidth. In the case of spectroscopic systems, most important is the spectrum associated with that to be measured. For coherent systems, the spectrum is driven by the desired coherence length.
Intensity and phase noise . Power instability, laser relative intensity noise, and phase noise further degrade the SNR. There is a growing need to understand the origins of these noises and how to suppress them. This often includes frequency ranges from the millihertz to beyond gigahertz.
Pulse characteristics . Time-of-flight systems, and often those that measure a dynamic quantity, require that the laser be pulsed. The pulse width may set the resolution of the system while the pulse repetition frequency may be set to prevent measurement aliasing.
Beam characteristics . This relates to whether the beam is diffraction limited or not, its divergence, and pointing stability. These characteristics will be particularly important as there is a renewed push into space. As an example, beam expansion relative to a distant receiver geometry reduces divergence and offsets pointing jitter, but in trade can reduce the received power.
Power and energy . This is related to system SNR, which in the shot-noise limit is proportional to the square root of the number of received photons.
Laser efficiency and temperature . In the quantum limit, the highest possible laser efficiency is related to the quantum defect (QD) which, for optically pumped systems, is defined as QD = 1 − λp/λs, where λs and λp are the signal and pump wavelengths, respectively. The QD also represents the minimum heat generated in a laser. Laser efficiency and thermal management are particularly important concerns in autonomous and space systems.
Thermomechanical environment and SWAP (Size, Weight, and Power) . Environmental influences such as temperature and vibration, etc, can introduce deleterious noises that impact system resolution. Space systems may have the additional need to minimize the impact of external radiation damage. SWAP requirements constantly push towards smaller, more efficient lasers. Figure 9 shows laser platforms on vastly different spatial and power scales.
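Several of the quantitative relations in the bullets above can be sketched numerically. The wavelength-drift bullet implicitly uses Δν = cΔλ/λ², the power bullet uses SNR ∝ √N, and the efficiency bullet states QD = 1 − λp/λs; the 1550 nm and Yb-fibre (976 nm pump, 1030 nm signal) wavelengths below are illustrative assumptions, not values from the text:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dlambda_to_dnu(wl_m: float, dwl_m: float) -> float:
    """Optical frequency drift equivalent to a wavelength drift: dnu = c*dlambda/lambda^2."""
    return C * dwl_m / wl_m**2

def shot_noise_snr(n_photons: float) -> float:
    """Shot-noise-limited SNR scales as the square root of the received photon number."""
    return math.sqrt(n_photons)

def quantum_defect(pump_wl_m: float, signal_wl_m: float) -> float:
    """QD = 1 - lambda_p/lambda_s: minimum fraction of absorbed pump power left as heat."""
    return 1.0 - pump_wl_m / signal_wl_m

# A 1 pm wavelength drift at 1550 nm is ~125 MHz of optical frequency drift,
# which is why sub-pm stability requires locking to an external reference
print(dlambda_to_dnu(1550e-9, 1e-12) / 1e6)  # ~124.8 (MHz)

# Collecting 100x more photons improves shot-noise-limited SNR by 10x
print(shot_noise_snr(1e8) / shot_noise_snr(1e6))  # 10.0

# Illustrative Yb-doped fibre laser: 976 nm pump, 1030 nm signal -> QD ~ 5.2%
print(quantum_defect(976e-9, 1030e-9))
```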
Advances in science and technology to meet challenges
Lasers are continuously evolving to serve the new and emerging applications that need them. At the forefront of many relatively new areas for lasers, such as lidars for vehicles including self-driving cars, commercial airlines, and spacecraft, is their common call for the laser to be as wall-plug efficient as possible [ 70 ]. This could relate to the QD and the generation of heat, or quantum efficiency. In some cases, the laser may have to be cooled on mobile platforms where convective systems such as flowing water may be far too impractical. The use of anti-Stokes fluorescence can enable lasers that are self-cooling [ 71 ]. This will assist in the management of thermal noises in lasers whose phases must be tightly controlled. In addition to power (which also reflects efficiency), size and weight budgets drive towards smaller and smaller values. As far as optical properties go, beam geometry and spatial coherence are becoming increasingly important. For example, lasers with structured light beams [ 72 ] and random lasers for speckle-free imaging [ 73 ] are enabling new sensing modalities with greatly improved performance. Regarding spectrum, there is a need for lasers, both pulsed and CW, producing wider wavelength ranges in the vacuum UV to UV, and mid-to-far IR and into the THz range. This is partly due to the abundance of molecular and atomic absorption features that can be used to probe the various levels of our atmosphere as it relates to pollution, greenhouse gas emission, climate change issues, and space weather. For reasons of SNR, the laser spectrum may be narrower, have the same width, or be broader than the spectrum to be measured. Therefore, on-the-fly control of the power spectral density, such as via phase-modulated systems, is needed to adapt to the changing conditions of the measurand.
Two final comments should be made here regarding lasers. It is already widely understood that a great deal of mechanical engineering must go into laser design, in particular for those that are meant to be portable. New paradigms in making lasers completely immune to their environments (e.g. vibrations, changes in temperature, etc) will be needed to support the relevant applications, namely autonomous platforms [ 74 ]. The final consideration outlined here relates to the complexity in assembling and integrating the laser. Lasers that are more compatible with mass production, are self-aligning, and come with simplified, perhaps modularized, redundancies, maintenance, and repair will likely win the commercial race to the emerging markets.
Concluding remarks
The development of lasers and laser-based sensing share a parallel timeline. The former enables the latter, and until the relevant applications become obsolete or unnecessary, improvements to laser technologies will continue to drive system progress. Herein, current and future challenges, as they relate to laser properties and needed advancements in their science and technology, have been briefly outlined. A blossoming new frontier in lasers is their use in space-based applications, including ranging, docking support, interferometric sensors, space weather, and even lidars that can measure compositional characteristics of celestial bodies within our own solar system and beyond. These will require high-efficiency, wavelength-versatile, low-SWAP lasers with well-controlled beams that can survive the harsh environment of space.
Acknowledgments
The author gratefully acknowledges funding from the U.S. Department of Defense Directed Energy Joint Transition Office (DE JTO) (N00014-17-1-2546) and the Air Force Office of Scientific Research (FA9550-16-1-0383).
Mid-infrared sensing
Ori Henderson-Sapir 1,2,3 and David J Ottaway 1,2
1 Department of Physics and Institute of Photonics and Advanced Sensing, The University of Adelaide, SA, Australia
2 OzGrav, University of Adelaide, Adelaide, SA, Australia
3 Mirage Photonics, Oaklands Park, SA, Australia
Status
The electromagnetic wavelength range between 3–8 μm is commonly referred to as the mid-infrared (MIR), and it offers unique opportunities for the remote sensing of trace gases (see figure 10 ) and volatiles. These include key greenhouse gases like methane and ethane and atmospheric pollutants such as NOx and SOx. It also promises improved characterisation and detection of organic substances in various settings. Extraordinary potential exists for MIR-based detection of diseases and medical conditions through breath analysis, using an inexpensive platform deployable in medical clinics. Despite initial steps in these directions, this holy grail of MIR sensing has yet to be fully realized.
The advantage that the MIR offers for the aforementioned applications is due to the strong characteristic absorption features present in this band, caused by rotational-vibrational bond excitation. The specific absorption features exhibited in this band are often referred to as the ‘absorption fingerprints’. These MIR absorption features can be two orders of magnitude stronger than their overtones in near-IR, which are currently used for sensing applications. Many of the techniques developed for the near-IR can be applied to the MIR to take advantage of the stronger absorption features present there.
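The practical gain from the roughly hundredfold stronger fundamental bands can be sketched with the Beer-Lambert law: at a fixed path length and absorbance noise floor, the minimum detectable concentration drops in proportion to the line strength. The line strengths, path length, and noise floor below are hypothetical illustrative numbers, not values from the text:

```python
import math

def transmitted_fraction(line_strength: float, conc: float, path_m: float) -> float:
    """Beer-Lambert law: I/I0 = exp(-S * c * L)."""
    return math.exp(-line_strength * conc * path_m)

def min_detectable_conc(line_strength: float, path_m: float,
                        min_absorbance: float = 1e-4) -> float:
    """Smallest concentration whose absorbance clears the system noise floor."""
    return min_absorbance / (line_strength * path_m)

S_NIR = 1.0    # overtone line strength (arbitrary units, hypothetical)
S_MIR = 100.0  # fundamental band, ~2 orders of magnitude stronger
L = 10.0       # interaction path length, m
ratio = min_detectable_conc(S_MIR, L) / min_detectable_conc(S_NIR, L)
print(ratio)  # ratio ~ 0.01, i.e. ~100x lower minimum detectable concentration
```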
The sensitivity of an active sensing platform is governed by the properties of its illuminating light source and detection system. In previous decades, the availability of convenient sources of coherent radiation in the MIR was limited, but there has been significant source development in recent years. MIR detectors have also advanced such that their performance is approaching the limit set by thermal background radiation over much of the band.
MIR sensing applications are becoming more ubiquitous as visible and near-IR sensing techniques have bridged the gap to the MIR. Two notable examples are the use of silica-glass based hollow core fibres and dual-comb spectroscopy. The latter is extremely promising for revolutionizing MIR sensing once MIR frequency combs, which are still in their infancy, become commercially available in convenient and robust packages.
Current and future challenges
The signal-to-noise ratio for the detection of greenhouse gases and volatiles increases with the spectral brightness of the illuminating source, the collection area of the primary detection optic, and the measurement time. Convenient MIR sources with high spectral purity, excellent beam quality, and reliability must be developed to take MIR sensing from the lab to the field.
To increase the sensitivity of in-situ measurements, interaction length is often a key parameter. Microstructured fibres, such as suspended- or hollow-core fibres, allow light to interact with a trace gas over extended distances within small volumes. Much work is needed to translate these technologies, which are mature in the near-IR, to the MIR. New MIR frequency comb sources will allow the parallel-sensing revolution of the near-IR to be translated to the MIR.
Broadband ‘white light’ sources based on supercontinuum generation now routinely cover the entire MIR band and are becoming commercially available, thereby making significant inroads especially in spectroscopy applications [ 74 ]. However, some applications call for the unique spectral purity and stability that frequency combs offer. Recently, various MIR frequency comb sources have been demonstrated, but significantly more work is required to turn them into robust turnkey sources [ 75 ].
Quartz-enhanced photoacoustic spectroscopy techniques have been employed as an effective method for sensing trace concentrations of greenhouse gases and volatiles [ 76 ]. Detection levels down to parts per billion and even parts per trillion have been achieved using this mature detection technique. Even lower detection limits are possible with potentially higher-power sources at the short end of the MIR.
Many challenges remain when working in the MIR. First and foremost is the lack of fibre-based components. Since the previous review, a few MIR fibre components have been demonstrated in the literature, suggesting a change in the trend; however, none have become commercially available. We expect a proliferation of MIR fibre lasers and sensing applications as components become commercially available, creating a new MIR revolution similar to, or even surpassing in magnitude, the one enabled by the availability of near-IR components.
Advances in science and technology to meet challenges
Significant advances in light sources and delivery are needed to realize the full potential of the MIR; these areas dominate the following discussion.
Interband and quantum cascade lasers (ICLs and QCLs) cover the 3–6 μm and 3–11 μm bands, respectively [ 77 , 78 ]. Their direct electrical excitation is convenient, and multi-watt average power levels are currently available (for QCLs) at room temperature in a significant fraction of the aforementioned band. Their small footprint provides a convenient platform for integration into compact trace gas sensing devices. Cascade lasers suffer from short upper-state lifetimes, typically picoseconds for QCLs and sub-nanosecond for ICLs. This limits energy storage, making high peak powers challenging to achieve [ 78 ]. However, mode-locked operation was recently demonstrated in both QCLs and ICLs [ 77 , 78 ]. Work is needed to increase their spectral bandwidth and stability for practical comb-based spectroscopy applications.
Near-IR frequency combs have advanced lab-based spectroscopy, metrology, and remote sensing. Extending these tools to the MIR, to exploit its significantly stronger absorption features, will increase sensitivity; ppb-level sensitivity has recently been demonstrated [ 75 ]. Comb generation in MIR micro-resonators and frequency shifting of near-infrared combs into the MIR have been demonstrated, including dual-comb methods [ 79 ]; however, more work is needed to provide convenient and inexpensive comb-based sources in the MIR.
The average power emitted by MIR fibre lasers has increased to 40 W at 2.7 μm, 10 W at 3.2 μm, 15 W at 3.4 μm, and 200 mW at 3.9 μm [ 80 ], with increased power and longer-wavelength emission an ongoing goal. Fibre lasers have long upper-state lifetimes, allowing significant energy storage and making high peak power possible in a compact and rugged device. Ultrafast fibre laser operation has been demonstrated in the short MIR on various bands ranging from 2.7 μm [ 81 ] to the 3.5 μm band [ 82 ]. In Q-switched operation, the highest peak powers, 15 kW in a single transverse mode [ 83 ], have been demonstrated on the 2.8 μm transition in erbium. Numerical modelling suggests that peak powers approaching 1 kW are possible from the 3.5 μm transition in erbium [ 84 ]. Once spectral control is achieved at these peak powers, fibre laser MIR lidars will soon follow.
Pushing the output of fibre lasers significantly beyond 3.5 μm requires lower-phonon-energy glasses such as indium fluoride, which has demonstrated 200 mW average power at 3.9 μm [ 85 ]. These glasses reduce absorption losses and non-radiative quenching [ 85 ]. Indium fluoride and various chalcogenide glasses are the main contenders for further advances, since they have the lowest maximum phonon energies of the semi-mature soft glasses. Indium fluoride is a relatively mechanically robust glass suitable for laser emission in the 4–5 μm range. Chalcogenide glasses have very low maximum phonon energies and the high optical transmission needed for emission beyond 5 μm. However, despite significant effort, only near-IR chalcogenide-based fibre lasers have reported power levels greater than a milliwatt, and many challenges remain.
Manufacturing micro-structures in MIR-transmitting glasses is challenging. Silica-based anti-resonant hollow-core fibres (ARHCFs) offer an interesting alternative, since light is guided within a hollow core with only minimal overlap with the silica glass features (see figure 11 ). ARHCFs have revolutionised guided light delivery and fibre-based sensing in the MIR. The transmission loss of ARHCFs now approaches record low levels for near- and MIR transmission. The long interaction length opens the possibility of reaching ppt detection levels of trace gases in a compact form [ 86 ].
Suspended-core fibres allow the interaction of the evanescent field with the substance under test. The evanescent field in MIR operation has a significantly greater extent, which increases the interaction with an analyte placed outside the core [ 87 ]. Exposed suspended-core fibres remove the slow process of filling the fibre. Suspended- and exposed-core fibres are a mature technology at near-IR wavelengths but are yet to be demonstrated in MIR-transmitting glasses such as fluorides or chalcogenides.
Concluding remarks
Development opportunities abound in the MIR range for improving trace gas sensing techniques that are both highly sensitive and selective. The field of MIR sensing has benefited significantly from the development of new light sources in recent years. However, for MIR sensing to ultimately fulfil its promise, much more remains to be done.
Acknowledgments
This work was performed, in part, at the OptoFab node of the Australian National Fabrication Facility supported by the Commonwealth and SA State Government. We thank Dr Erik Schartner for providing the ARHCF image.
This work was supported in part by the US Air Force Office of Scientific Research Award FA-9550-20-1-0160 and Australian Research Council Discovery Grant DP220102516.
Terahertz sensors and sensing with terahertz
Elodie Strupiechonski 1 , Goretti G Hernandez-Cardoso 2 , Arturo I Hernandez-Serrano 3 , Francisco J González 4 and Enrique Castro Camus 2
1 CONACYT-CINVESTAV-Queretaro, CIDESI, Mexico
2 Philipps-Universität Marburg, Germany
3 University of Warwick, United Kingdom
4 Ciacyt-UASLP, Mexico
Status
Terahertz (THz) radiation is usually defined as the region of the electromagnetic spectrum ranging from the high end of the microwave band to the lower end of the MIR (0.3–30 THz, 1 mm–10 μm, 10–1000 cm −1 , 1.24–124 meV). Significant advancements toward improving the efficiency of both THz detectors and emitters have been made over the last two decades, yet there are still challenges and opportunities in this direction. The potential of THz technologies, which are of crucial relevance to some of the most important modern problems, justifies the continued efforts to develop this field.
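The band edges quoted above follow from standard unit conversions (λ = c/f, wavenumber = f/c, E = hf); a quick sketch using CODATA constants confirms them:

```python
C = 299_792_458.0        # speed of light, m/s
H_EV = 4.135667696e-15   # Planck constant, eV*s

def thz_band_edges(f_hz: float):
    """Wavelength (m), wavenumber (1/cm), and photon energy (meV) at frequency f."""
    wavelength_m = C / f_hz
    wavenumber_inv_cm = f_hz / (C * 100.0)
    energy_mev = H_EV * f_hz * 1e3
    return wavelength_m, wavenumber_inv_cm, energy_mev

# The two edges of the THz band as defined in the text
for f in (0.3e12, 30e12):
    wl, k, e = thz_band_edges(f)
    print(f"{f/1e12:g} THz -> {wl*1e6:.3g} um, {k:.3g} cm^-1, {e:.3g} meV")
```

Running this reproduces approximately 1 mm (999 μm), 10 cm⁻¹ and 1.24 meV at 0.3 THz, and 10 μm, 1000 cm⁻¹ and 124 meV at 30 THz.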
THz waves have unique properties which enable non-destructive, non-ionizing, and label-free sensing, facilitating novel applications in imaging and spectroscopy [ 88 ]. Because this frequency range has such exciting prospects, terahertz technology has clearly become an emerging field with significant growth on the scientific front. It is now entering the commercial markets for end-users in the healthcare and pharmaceutical industries [ 89 ], defence and security [ 90 ], non-destructive testing [ 90 , 91 ], telecommunications, and astronomy.
With the emergence of AI-enabled systems and the fast-growing adoption of terahertz technologies in research laboratories as well as in industry, better-performing sensors will be critical to reducing system cost and size and, consequently, to wider adoption by end-users for practical applications. Room-temperature, low-power, low-cost, compact, and highly sensitive THz sensors will provide additional and complementary data sets, which are highly desirable for the incorporation of THz technology into real-world non-destructive testing.
Current and future challenges
The availability of THz detectors and emitters is the main limitation to the development of THz sensing technologies. THz imaging and spectroscopy systems for sensing can work either in the time domain (THz-TDS) or the frequency domain (THz-FDS). THz-TDS systems represent the pioneering and most developed approach for coherent, sensitive, and fast detection; they used to be bulky (Ti:sapphire lasers) and are now available in reduced dimensions (ultrafast fibre lasers). THz-FDS systems also exist for real-time sensing and imaging. The drawback is that high sensitivity (low noise equivalent power, NEP) and/or fast detection can be achieved with cooled sensors, which are often bulky and expensive to operate.
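The NEP figure of merit mentioned above sets a detector's noise floor: the minimum detectable power over a given post-detection bandwidth B is P_min = NEP·√B. The NEP values and bandwidth below are hypothetical round numbers chosen only to illustrate the cooled-versus-uncooled trade-off, not figures from the text:

```python
import math

def min_detectable_power(nep: float, bandwidth_hz: float) -> float:
    """Detector noise floor: P_min = NEP * sqrt(B), with NEP in W/sqrt(Hz)."""
    return nep * math.sqrt(bandwidth_hz)

# Hypothetical figures: an uncooled THz detector (NEP ~ 1e-10 W/Hz^0.5)
# versus a cooled bolometer (NEP ~ 1e-13 W/Hz^0.5), both at 1 kHz bandwidth
print(min_detectable_power(1e-10, 1e3))  # ~3.2e-9 W
print(min_detectable_power(1e-13, 1e3))  # ~3.2e-12 W
```

The thousandfold lower NEP of the cooled device buys a correspondingly lower noise floor, which is the sensitivity-versus-cost trade-off described in the paragraph above.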
The choice of the best THz sensor essentially depends on the target application. Amongst the most desirable parameters are high sensitivity, broad-band operation, high dynamic range, reliability, high electrical and mechanical stability, and the possibility of being combined into planar arrays. Essentially, the practical needs in terms of detector sensitivity, spectral and spatial resolutions, speed, operation conditions, and space and cost constraints will be decisive in selecting the best THz sensing system for a given application. Within less than ten years, we saw the development of hand-held THz sensing technology [ 92 ], real-time near-field THz imaging [ 93 ], phase imaging [ 94 ], holography, on-chip characterization, THz spectroscopy of biological tissues and liquids [ 95 ], and metrology. For the most recent experimental demonstrations, the trend seems to be to modify commercial detector/emitter systems and endow them with new sensing functions to reach enhanced sensitivity or resolution using elements from plasmonics, metamaterials, 2D materials, nanowires, nanoplasmonic fibre tip probes, photonic crystal fibres, microstructured waveguides, or two-channel parallel-plate waveguides. Super-resolution and ultra-compact THz sensors have also been demonstrated by rescaling existing techniques or proposing novel sensing schemes such as single-pixel imaging [ 96 ] which is shown in figure 12 , total internal reflection with photomodulation [ 97 ], or on-chip systems [ 98 ]. This indicates that terahertz sensing technologies, within only two and a half decades, are already halfway on the scale of technology readiness [ 99 ].
Advances in science and technology to meet challenges
The THz-sensing application fields that will see the most critical growth are probably the healthcare and pharmaceutical sector (HPS), followed by non-destructive testing for the aerospace, aeronautics, automotive, and plastics industries. However, for biosensors in the HPS, compliance with regulations is mandatory, and growth may be slowed on the path from validation of the devices to acceptance by local medical personnel. On the other hand, to meet industry standards, the THz sensing industry will need to move towards small size, weight, power, and cost (SWaP-C) sensors. Fast detectors that can be developed into planar arrays are desirable for real-time applications where single-pixel imaging cannot be implemented. Since real-time imaging generates large amounts of data at a fast rate, AI will need to be incorporated into THz sensing systems to automate complex pattern recognition and image interpretation. In general, the development of functional THz sensing systems will require a holistic strategy, one that crosses the boundaries of many emerging research fields.
Concluding remarks
Considering the essential findings and capability gaps briefly described in the previous sections, we identified an increasing demand for terahertz sensing systems in the medical sector and non-destructive testing applications. Applications in security and telecommunications are also critical and will continue to grow. A transdisciplinary strategy, in parallel with recognizing and responding to the lack of awareness of the terahertz technology in general by potential users, is also essential at this stage of the development of this technology.
Acknowledgments
All the authors of this section are either current or former members of the Laboratorio Nacional de Ciencia y Tecnología de Terahertz (Mexico) and would like to acknowledge the support from CONACYT through various grants. E C C would like to acknowledge the support from the Alexander von Humboldt Foundation.
Biomedical optical sensors
Alexis Méndez 1 and Paola Saccomandi 2
1 MCH Engineering LLC, United States of America
2 Department of Mechanical Engineering, Politecnico di Milano, Italy
Status
With a growing global population requiring healthcare and the need for ever more sophisticated diagnostic tools, clinicians worldwide are focusing on advanced biomedical instrumentation and sensors as necessary and effective tools for patient diagnosis, monitoring, treatment and overall care. Many of the medical instruments in use today rely on optics (and optical components) to perform their function. With the development of semiconductor lasers in the 1960s, modern medical optics began to take shape and, coupled with the availability of optical fibres, a new generation of optical biomedical instruments, sensors and techniques began to be developed. The advantages of optical fibres were recognized by the medical community long ago [ 100 ]. Their initial and still most successful biomedical application has been in the field of endoscopic imaging, with the first fibre optic endoscope demonstrated in 1957 [ 101 ]. Then, during the 1980s and 90s, extensive research was conducted to develop fibre-based biological, chemical and medical sensors [ 102 ]. Fibre-optic based sensors are ideally suited for a broad variety of—invasive and non-invasive—applications in life sciences, clinical research, medical monitoring and diagnostics, such as OCT probes, force- and shape-sensing catheters in robotic surgery, intra-aortic pressure probes and temperature monitoring for thermal-based therapies for localized tumours [ 103 ].
To date, biomedical sensors based on external-cavity Fabry-Perot interferometers, FBGs, and spectroscopic types based on light absorption and fluorescence are among the most researched and developed into commercial products [ 104 ]. Fibre-optic biomedical sensors often rely on the use of special coatings or small cavities holding a specific reagent that can detect a given biochemical analyte of interest. This is common practice in so-called optrodes, as well as in the use of tilted FBGs. Besides fibre-based devices, integrated-optic planar devices are also an attractive and effective platform on which to develop biomedical sensors.
Biomedical optical sensors can be categorized into four main types: physical, chemical, biological and imaging . Physical sensors measure a broad variety of physiological parameters such as body temperature, blood pressure (figure 13 ), respiration, heart rate, blood flow, muscle displacement, cerebral activity, etc. Chemical sensors rely on fluorescence, spectroscopic and indicator techniques to measure and identify the presence of particular chemical compounds and metabolic variables (pH, blood oxygen, glucose). Chemical sensors detect specific chemical species for diagnostic purposes, as well as monitoring the body’s chemical reactions and activity. Biological sensors tend to be more complex and rely on biologic recognition reactions—such as enzyme substrate, antigen-antibody, or ligand-receptor—to identify and quantify specific biochemical molecules of interest. Imaging sensors encompass both endoscope devices for internal observation and imaging, as well as more advanced techniques such as OCT, photoacoustic imaging and others, where internal scans and visualization can be made non-intrusively.
Biomedical sensors also present some unique design challenges. They need to be safe, reliable, highly stable, biocompatible, amenable to sterilization and autoclaving, not prone to biological rejection, and should require no calibration, or at least maintain calibration over prolonged periods. Sensor packaging, in particular, is a critical aspect: it is highly desirable that sensors be as small as possible, especially those intended for implantation or indwelling use.
Current and future challenges
Nowadays, medical personnel rely increasingly on advanced biomedical instrumentation and sensors as tools for patient diagnosis, monitoring, treatment and care. There is also a need for analytical instruments that provide faster results on blood and other sample analyses, enabling on-the-spot actionable diagnosis. In addition, advances in minimally invasive surgery, coupled with the advent of medical robotics and computer-assisted surgical systems, are driving demand for smaller disposable sensing catheters and sensing probes. These needs offer many opportunities for the design and development of optical sensors; but beyond ensuring that the devices are safe, effective, easy to use, fast-responding and low-cost, there remains the challenge of identifying a suitable sensing technique and a platform that can be exploited for multi-parameter sensing.
Advances in science and technology to meet challenges
Among the numerous innovations taking place in optics and photonics, four key breakthrough technologies hold particular promise for biomedical sensing. The first is Raman spectroscopy. Spontaneous Raman scattering results from inelastic light-scattering processes, which lead to the emission of scattered light at a frequency shifted by the molecular vibrations of the identified molecule. Several techniques have been developed to enhance the signal, such as coherent anti-Stokes Raman spectroscopy, stimulated Raman spectroscopy (SRS), resonance Raman spectroscopy and SERS, all of which have become prominent techniques for optical biosensing and bioimaging. Advances in pulsed laser sources have allowed the vibrational features of biological structures to be explored; hence, SRS is commonly used as a probing technique for ultrafast, time-resolved characterization of biological systems such as myoglobin [ 105 ]. In SERS, the inelastic scattering of light by molecules is strongly enhanced (by factors up to 10^15, reaching the single-molecule level) when the molecules are adsorbed onto specific substrates of corrugated metal surfaces embedding metal (silver or gold) NPs [ 106 ]. The selective detection and localization of target molecules requires target-specific ligands, the nanotags, for molecular recognition via non-covalent interactions. This technology has opened the door to other research fields with diverse biomedical applications. An example is the development of theranostic platforms based on gold NPs, which combine SERS detection for in vivo cancer diagnosis with light-based therapies (e.g. photodynamic therapy, photothermal therapy or photoimmunotherapy). Other uses include the combination with microfluidics to perform immuno- and cellular assays, and the diagnosis of degenerative disorders (e.g. the presence of amyloid proteins in Alzheimer's disease), infectious diseases (e.g. the presence of a virus) and genetic diseases (i.e. the presence of mutations in DNA). Optical SERS-based microfluidic and lab-on-chip platforms are expected to have a substantial impact on biomedical diagnostics in the near future. Novel fibre-optic sensing platforms now exploit SERS for biological measurements: tip-coated multimode fibre, liquid-core photonic crystal fibre and other configurations are employed as SERS probes for remote and multiplexed detection [ 107 ].
Nanophotonic devices, which control light in sub-wavelength volumes and enhance light-matter interactions [ 108 ], represent another key innovative technology for biomedical sensing, driven by the synergy between fibre-optic sensors and nanotechnology. Advances in the deposition of nanomaterials have given a boost to the area of optical fibre sensors. Nanostructured thin films and nano-coatings, such as gold and graphene, have been applied to several optical fibre configurations for the development of new sensors, including conventional fibres (e.g. etched or multimode fibres based on SPR and localized SPR), grating-based fibres (e.g. FBGs, tilted FBGs, long-period FBGs) and microstructured optical fibres, for detecting multiple physical and biochemical parameters [ 109 ]. Other novel capabilities brought on by these devices take the form of the so-called lab-on-a-fibre (LOF) [ 110 ], in which functionalized thin layers of micro- and nano-particle materials are deposited on the tip of an optical fibre.
A third driving technical advance, particularly marketwise, is wearable fabrics and clothing for diverse health, wellness, sports and fitness applications. For example, skin-like wearable optical sensor patches with embedded optical nanofibres are being proposed for continuous monitoring of temperature and respiration parameters [ 111 ]. Similarly, smart textiles with woven fibre-optic sensors are under development for monitoring physiological parameters such as breathing and cardiac rate [ 112 ]. Wearables fitted with optical sensors represent one of the milestones towards the realization of effective personalized medicine [ 113 ].
Lastly, small, flexible optical fibre sensors are increasingly entering the design of minimally invasive medical devices such as surgical robots. Technologies based on high-density FBGs, or on distributed sensing using Brillouin and Rayleigh scattering, provide accurate, spatially resolved information (pressure, strain, temperature) along the entire length of a surgical instrument, without the use of additional devices [ 114 ].
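As a concrete illustration of the FBG-based sensing mentioned above, the standard Bragg-grating relations can be sketched in a few lines. The effective index, grating period and photo-elastic coefficient below are typical textbook values for silica fibre near 1550 nm, assumed for illustration rather than taken from this section.

```python
# Standard fibre Bragg grating (FBG) relations used in strain sensing;
# n_eff, grating period and photo-elastic coefficient p_e are typical
# textbook values for silica fibre near 1550 nm (assumed, not from the text).

def bragg_wavelength_nm(n_eff, period_nm):
    """Bragg condition: lambda_B = 2 * n_eff * Lambda (grating period)."""
    return 2.0 * n_eff * period_nm

def strain_shift_pm(lambda_b_nm, microstrain, p_e=0.22):
    """Shift from axial strain: d(lambda)/lambda = (1 - p_e) * strain."""
    return lambda_b_nm * 1e3 * (1.0 - p_e) * microstrain * 1e-6

lam = bragg_wavelength_nm(n_eff=1.447, period_nm=535.6)
print(round(lam, 1))                        # 1550.0 nm
print(round(strain_shift_pm(lam, 1.0), 2))  # 1.21 pm per microstrain
```

The roughly 1.2 pm shift per microstrain at 1550 nm is what makes dense wavelength-multiplexed FBG arrays practical for instrument-shape sensing: each grating occupies only a narrow spectral slot.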
Concluding remarks
Optics is a versatile enabling technology for the development of present and future generations of novel biomedical sensors and sensing techniques for diagnostic, therapeutic and surgical applications. Biomedical optical sensors are becoming increasingly pervasive across the medical industry, finding applications in pharma, biotech, as well as in medical wearables and surgical robotics. However, their development is not trivial and proper design, materials selection, bio-compatibility, patient safety and other key issues must be properly considered to pass industry certifications and ensure commercial success.
Single molecule detection in diagnostic assays
Qimin Quan 1 and Zhongcong Xie 2
1 NanoMosaic Inc., United States of America
2 Massachusetts General Hospital and Harvard Medical School, United States of America
Status
It is estimated by the US Centers for Disease Control and Prevention (CDC) that 70% of today's medical decisions are based on laboratory tests. About 200 proteins are cleared or approved by the US Food and Drug Administration (FDA), or approved under the Clinical Laboratory Improvement Amendments (CLIA) regulations, for detecting cardiac, cancer, diabetes, infectious and other diseases [ 115 ]. The transition from a model of intervention to one of prevention is an ongoing effort in the healthcare system and a major force pushing diagnostic assays towards better sensitivity and specificity. Protein biomarkers are exceptionally challenging because, unlike nucleic acids, no amplification mechanisms are available to increase the copy number of proteins. The complex sample matrices (such as plasma/serum, cerebrospinal fluid and urine) and the wide concentration ranges of different proteins require a detection method with both high sensitivity and large dynamic range. Optical immunoassays are the primary method for protein quantitation and demand improvements in sensitivity and accuracy. The two fundamental steps in an immunoassay are (1) capturing target analytes with high selectivity using specific affinity probes, and (2) transducing the binding events into a physical readout that is sensitive and robust to implement. Developing high-affinity probes, including antibodies and their fragments, aptamers and engineered molecular constructs, is an active and important field, although it is outside the scope of the current discussion. This section focuses on readout mechanisms and discusses the current challenges and future opportunities of using single-molecule optical detection (figure 14) to improve the sensitivity, specificity and accuracy of diagnostic assays. Technical advances in the readout format will benefit all types of binding modalities.
Current and future challenges
Limit of quantitation.
Cytokines are the class of proteins with the lowest concentrations in the human proteome. An order-of-magnitude analysis of their nominal concentration is useful in determining the limit of quantitation to be reached. For example, there are approximately 100–1000 CD4 cells per ml of blood, each producing 1000–10 000 proteins that are secreted and diluted into 5 l of blood. Thus, cytokine detection requires a limit of quantitation down to the level of 100 fg ml^-1. Neurological biomarkers are present at similar concentrations in blood (100 fg ml^-1–10 pg ml^-1), since they must cross the blood-brain barrier. The current gold-standard method, the enzyme-linked immunosorbent assay (ELISA), typically reaches a lower limit of quantitation of around 10 pg ml^-1. Fluorescence, chemiluminescence and electrochemiluminescence have shown better detection limits and dynamic range, thanks to advances in CMOS and CCD imaging technologies. However, consistent performance below 1 pg ml^-1 remains challenging.
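The order-of-magnitude estimate above can be reproduced with a short calculation. The 20 kDa molecular weight assumed for a typical cytokine is our assumption, not a figure from the text:

```python
# Back-of-envelope cytokine concentration from the cell numbers quoted above.
# The 20 kDa molecular weight is an assumed typical cytokine mass.
AVOGADRO = 6.022e23

def cytokine_fg_per_ml(cells_per_ml, proteins_per_cell, mw_da=2.0e4):
    # Secreted molecules remain in the same blood volume the cells occupy,
    # so molecules/ml = (cells/ml) * (proteins secreted per cell).
    molecules_per_ml = cells_per_ml * proteins_per_cell
    grams_per_ml = molecules_per_ml / AVOGADRO * mw_da
    return grams_per_ml * 1e15  # grams -> femtograms

low = cytokine_fg_per_ml(100, 1_000)      # ~3 fg/ml
high = cytokine_fg_per_ml(1_000, 10_000)  # ~300 fg/ml
print(round(low, 1), round(high))
```

Both ends of the range fall at or below the ~100 fg ml^-1 level quoted above, an order of magnitude or more beyond the reach of a conventional ELISA.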
Precision and accuracy.
Both precision (% coefficient of variation, or CV) and accuracy (mean % deviation from the nominal concentration) are key parameters in evaluating the analytical performance of an assay. A CV of 20% (25% at the lower limit of quantitation) is the FDA-recommended acceptance criterion for protein assays, while high accuracy is especially important when discovering and validating new biomarkers, since the fold increase (or decrease) in the disease phenotype is in many cases less than 2. The most accurate nucleic acid quantitation method, digital PCR, has achieved an intra-assay CV of 2% and an inter-assay CV of 5% [ 116 ], and is widely used in the manufacturing quality-control process for gene and cell therapies. It should also be noted that at ultra-low concentrations, molecular shot noise must be considered: the CV scales as the inverse square root of the number of molecules detected, so achieving a CV of 2% at a concentration of 1 fM requires at least 2500 molecules [ 117 ], which sets a fundamental lower limit on the sample volume of about 4 µl.
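The shot-noise argument can be made explicit with a minimal sketch of the CV-to-molecule-count relation:

```python
# Molecular shot noise: counting N molecules gives CV = 1/sqrt(N), so a target
# CV fixes the minimum number of molecules and hence the minimum sample volume.
AVOGADRO = 6.022e23

def molecules_for_cv(cv):
    """Poisson counting statistics: N = 1 / CV^2."""
    return 1.0 / cv**2

def min_volume_ul(cv, conc_molar):
    """Smallest volume that holds enough molecules for the target CV."""
    molecules_per_litre = conc_molar * AVOGADRO
    return molecules_for_cv(cv) / molecules_per_litre * 1e6  # litres -> ul

print(molecules_for_cv(0.02))                # 2500.0 molecules
print(round(min_volume_ul(0.02, 1e-15), 1))  # 4.2 ul at 1 fM
```

This reproduces the numbers quoted above: 2500 molecules for a 2% CV, corresponding to roughly 4 µl of a 1 fM sample.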
Absolute quantitation.
Current protein assays rely on spiking calibrator proteins into a surrogate buffer to build a standard curve. Although FDA guidelines recommend that the standard-curve buffer be identical, or as similar as possible, to the sample matrix (especially for pharmacokinetic applications), a surrogate matrix is often used because no current technology can detect every single protein in a given volume, unlike digital PCR. Eliminating the need for calibration curves would simplify the assay workflow and alleviate the cost burden associated with developing standards, calibrating instruments and bridging studies.
Multiplexing.
FDA-cleared or -approved (including under the CLIA program) protein biomarkers represent only 1% of the human proteome [ 115 ]. Discovery tools such as mass spectrometry can cover a few thousand proteins, still a fraction of the whole proteome. A highly multiplexed technology that does not compromise detection sensitivity or quantitation accuracy may lead to new diagnostic approaches in which a disease phenotype is correlated with a combinatorial signature of a large number of biomarkers.
Advances in science and technology to meet challenges
The effort to push the limits of immunoassays traces back to the 1970s, when Harris et al [ 118 ] demonstrated a 100-fold improvement over ELISA, reaching a limit of detection of 10^-21 M by radio-labelling the enzyme substrate and allowing the enzyme reactions to proceed for several hours. Although radioactive biohazards restricted the wide application of radioimmunoassay, this work was important in demonstrating that improving the readout significantly improves assay performance under the same biological conditions. By replacing the radiolabels with fluorescent detection, Rondelez et al [ 119 ] further demonstrated that confining the enzyme reactions in isolated, micrometre-sized wells accumulates enough fluorescent signal from a single enzyme molecule; the fluorescence is detectable with conventional microscopy and the reaction time is reduced to a few minutes. Rissin et al [ 120 ] extended this confined enzyme-amplification approach and incorporated it into the ELISA workflow. Antibody-coated beads are mixed in excess with the target analyte, pushing the system into the Poisson regime in which a single molecule, or none, is bound on each bead. The beads are then loaded into isolated microwells where the enzyme reactions are confined (figures 15(a)–(d)). An improvement of >1000 times over ELISA was demonstrated, and this novel single-molecule array (Simoa™) approach was coined digital ELISA. Two other single-molecule approaches are now commercially available. The single-molecule counting (SMC™) technology replaces the enzyme-amplification process with confocal imaging of single fluorescent reporters [ 121 ]. As in digital ELISA, antibody-analyte sandwiches are formed on beads and the detection antibodies are fluorescently labelled; these are eluted into the focal spot of a confocal microscope and counted as single-molecule fluorescence events (figures 15(e)–(g)).
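The Poisson loading statistics that underpin digital ELISA can be sketched as follows; the mean bead occupancy used in the example is illustrative, not a value from the text.

```python
import math

# Digital-ELISA bead loading follows Poisson statistics: with mean occupancy
# lambda << 1, nearly every bead carries zero or one analyte molecule, and the
# fraction of "on" wells encodes the concentration. lambda = 0.1 is illustrative.

def fraction_active_wells(mean_per_bead):
    """P(bead carries >= 1 molecule) = 1 - exp(-lambda)."""
    return 1.0 - math.exp(-mean_per_bead)

def mean_from_fraction(f_on):
    """Invert the Poisson relation: lambda = -ln(1 - f_on)."""
    return -math.log(1.0 - f_on)

f_on = fraction_active_wells(0.1)
print(round(f_on, 4))                      # 0.0952
print(round(mean_from_fraction(f_on), 6))  # recovers 0.1
```

Counting the fraction of fluorescent wells and inverting the Poisson relation is what makes the readout "digital": each well contributes a binary yes/no rather than an analogue intensity.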
The nanoneedle technology (MosaicNeedle™) replaces both the beads and the fluorescent labels with nanoneedle biosensors that detect the optical spectral shifts induced by the antibody-analyte complexes formed on the nanoneedles [ 122 , 123 ]. The nanoneedles are 100-fold smaller than the beads, so a higher multiplexing level can be achieved within a significantly smaller footprint (figures 15(h)–(j)).
The prevalence of hydrogen bonds, electrostatic interactions and salt bridges leads to a non-specific binding equilibrium constant of approximately 10^-3 M, whereas high-affinity antibodies have equilibrium constants of typically 10^-12 M. Since high-abundance blood proteins are present at concentrations in the 10^-3 M range, non-specific binding will occupy a large portion of the binding sites when the analyte concentration falls into the 10^-15 M range (e.g. cytokines and neuro-markers). Non-specific binding is therefore the dominant contributor to background noise in high-sensitivity assays.
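A minimal equilibrium (Langmuir-isotherm) sketch illustrates why non-specific binding dominates at femtomolar analyte concentrations; the equilibrium constants are the values quoted above, while the concentrations plugged in are illustrative choices from the stated ranges.

```python
# Simple Langmuir-isotherm sketch of why non-specific binding dominates:
# equilibrium constants follow the text (10^-3 M non-specific, 10^-12 M for
# a high-affinity antibody); the concentrations are illustrative.

def occupancy(conc_molar, kd_molar):
    """Fraction of surface sites occupied at equilibrium: C / (Kd + C)."""
    return conc_molar / (kd_molar + conc_molar)

specific = occupancy(1e-15, 1e-12)    # femtomolar cytokine on its antibody
nonspecific = occupancy(1e-3, 1e-3)   # abundant blood protein, weak sticking
print(specific, nonspecific)          # ~0.001 vs 0.5
```

Even with a million-fold affinity advantage, the specific analyte occupies only ~0.1% of its capture sites while weak non-specific sticking saturates half the surface, which is the background problem the single-molecule readouts below address.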
Single-molecule methods provide a mechanism to effectively increase the SNR, because the signals are collected from a confined area and are binary (zero or one): the ratio of specific to non-specific binding signals remains constant as the analyte concentration decreases. In contrast, in conventional immunoassays based on an analogue readout, the SNR decreases with decreasing analyte concentration. In addition, blocking reagents and washing steps are critical to suppress non-specific binding, as is the use of a second antibody forming a sandwich with the analyte and its primary capture antibody.
Single-molecule methods provide a 1:1 conversion between the readout signal and the number of analyte molecules. In principle, a calibration curve (a set of signal responses as a function of known analyte concentrations) is no longer required. However, absolute quantitation has not yet been demonstrated, since only an unknown percentage of the target analytes is captured and counted. Shirai et al [ 124 ] designed a nanofluidic channel that confines the assay and detection into a chamber of the order of 10^2 nm, allowing 100% capture of the analytes, as the channels are much smaller than the diffusion length of the analytes during incubation.
Concluding remarks
Single-molecule detection has shown clear advantages in sensitivity, precision and accuracy, and has the potential to achieve absolute quantitation without the need for calibrators. Although no digital immunoassay had been FDA cleared or approved at the time of this review, these methods are widely adopted in basic and clinical research to study neurological biomarkers and cytokines and to monitor prognostic changes in low-abundance biomarkers. It should be noted that the assay development life cycle has many phases, including method development, pre-study validation and in-study validation. This article has focused on method development, although a holistic development plan and early engagement in regulatory conversations are also key to successfully pushing diagnostic applications from bench to bedside.
Nanoplasmonic optical probes in biological imaging
Björn M Reinhard
Department of Chemistry and The Photonics Center, Boston University, United States of America
Status
Noble metal NPs sustain size- and shape-tunable LSPRs throughout the visible and the Near-Infrared that provide unique opportunities for biological imaging. Depending on their size, noble metal NPs provide large scattering or absorption cross-sections that facilitate their use as labels in optical microscopy, as well as in photothermal and photoacoustic imaging [ 125 ]. Furthermore, the strong E-field localization associated with LSPR excitation enables signal enhancements of Raman labels through SERS for applications in bioimaging [ 126 ]. The superb photophysical properties of noble metal NPs also permit theranostic applications in which the NPs have both diagnostic and therapeutic uses [ 127 , 128 ].
A particularly interesting property of noble metal NP probes is that electromagnetic coupling between them shifts the plasmon resonance wavelength. This coupling is distance-dependent and is relevant for separations of approximately one NP diameter and below. The spectral shift of the plasmon resonance induced by close contact between two or more NPs is detectable in the far field and is exploited in plasmon coupling microscopy (PCM) (figure 16) to detect sub-diffraction-limit proximities [ 129 ]. One important application of PCM is the detection and characterization of the spatial clustering of cell-surface receptors. Although PCM does not directly resolve the actual size of cell-surface receptor clusters, the resonance wavelength of the NP labels provides a quantitative metric for their spatial clustering, which depends on the spatial distribution of the receptors (figure 17). The ability of PCM to detect and characterize spatial cell-surface receptor heterogeneity was evaluated using the fluorescence-based super-resolution technique direct stochastic optical reconstruction microscopy (dSTORM) as a benchmark [ 130 ]. The comparative study revealed that the spectral shifts obtained by PCM for selected breast cancer cells with different degrees of epidermal growth factor receptor (EGFR) expression were consistent with differences in average EGFR cluster size as determined by dSTORM. PCM is compatible with high-throughput imaging, which makes the technology interesting for screening biological samples and characterizing cell-to-cell variability.
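The distance dependence exploited by PCM is often summarized by an empirical exponential "plasmon ruler" relation. The prefactor and decay constant below are values reported for gold nanodisc pairs and are assumptions for illustration, not numbers from this section:

```python
import math

# Empirical "plasmon ruler" relation for the coupling-induced resonance shift
# of a metal NP pair: d(lambda)/lambda ~ A * exp(-(gap/diameter)/tau).
# A ~ 0.18 and tau ~ 0.23 are literature values for gold nanodisc pairs
# (assumed here for illustration).

def plasmon_ruler_shift(gap_over_diameter, a=0.18, tau=0.23):
    """Fractional LSPR shift as a function of gap in units of NP diameter."""
    return a * math.exp(-gap_over_diameter / tau)

print(round(plasmon_ruler_shift(0.1), 3))  # close pair: strong shift
print(round(plasmon_ruler_shift(1.0), 4))  # one diameter apart: nearly gone
```

The shift decays by roughly two orders of magnitude between a gap of one tenth of a diameter and a full diameter, consistent with the statement above that coupling matters only for separations of about one NP diameter and below.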
Another important application of nanoplasmonic optical probes is as mimics of virus particles. In artificial virus NPs (AVNs) [ 131 , 132 ], the metal NP core is encapsulated in a self-assembled hybrid membrane consisting of an inner octadecanethiol layer and an outer self-assembled lipid layer of defined composition. As AVNs consisting of a gold NP and a lipid coating combine the surface properties of a biomimetic membrane with the large optical cross-sections of a noble metal NP core, they are interesting biophysical tools for investigating lipid-mediated virus-cell interactions. Specifically, AVNs were used to explore the role of the ganglioside GM3 in the glycoprotein-independent binding of human immunodeficiency virus 1 (HIV-1) to CD169 (Siglec-1)-expressing macrophages and dendritic cells, and the subsequent intracellular sequestration in non-endolysosomal compartments. GM3-functionalized NPs were also shown to bind to CD169-expressing cells in the lymph nodes of mice after hock injection, facilitating the targeting of these cells in vivo [ 133 ]. In addition to assembling membranes of defined composition around noble metal NP probes, wrapping isolated cell membranes around the NP core has been demonstrated to be advantageous for a broad range of in vivo applications [ 134 ].
Current and future challenges
The compatibility of noble metal NPs with a wide range of optical imaging modalities, together with their large cross-sections in electron and x-ray microscopies, makes them versatile labels at the subcellular, cellular, tissue and whole-animal levels across different imaging modalities. Unlike fluorescent probes, NPs do not bleach. Owing to this multimodality and unique photophysical stability, noble metal NPs are useful probes for applications that require long, continuous illumination or that aim to elucidate the fate of functionalized NPs in biological systems ranging from cells to whole animals. One important trend in this context is the biomimetic design of NPs, such as AVNs, whose size, morphology and surface properties reflect those of biological entities like vesicles or exosomes but whose core contains a noble metal NP with strong optical properties. In these applications, the NPs are more than simple imaging labels; they represent bio-inspired hybrid materials that combine the surface properties provided by the self-assembled lipid membrane with the optical properties of the noble metal core to enable bioimaging and biosensing applications. The development of these technologies requires an exact understanding of the interfacial properties of the NPs and of how they are affected by complex biological matrices. Eventually, NP probes with rationally designed surface chemistries, in combination with enhanced NP imaging modalities, will contribute to elucidating the fundamental mechanisms of NP-cell interactions and how they affect cellular regulation. This knowledge will be instrumental in overcoming the challenges of developing targeted NP delivery systems and new nanomedicines.
The light-dependent responses of biomimetic NPs with a noble metal core can be exploited to engineer active responses that are missing in biological nanomaterials. It is conceivable, for instance, that the properties of lipid-coated noble metal NPs could be actively modulated by light irradiation, for example if irradiation heats the NPs and induces a phase change in the membrane, paving a path to adaptable biosensors and bioimaging probes.
In addition to the challenges of understanding and controlling the interfacial properties of nanoplasmonic probes, the properties of the core also offer opportunities for further improvement. Currently, most applications of nanoplasmonic probes in biosensing and imaging are based on gold and silver NPs, which imposes some fundamental limitations on the scope of the approach.
Advances in science and technology to meet challenges
Improving control of the interfacial properties of nanoplasmonic probes is challenged by the complexity of an interface that depends on a large number of parameters. However, more efficient molecular dynamics codes and AI algorithms can be expected to improve the accuracy of modelling the interactions of nanoprobes with complex biological matrices. In the future, it may be feasible to determine the surface composition of nanoplasmonic probes for specific applications using appropriate computational tools. Improved modelling capabilities will also help to better understand how the protein corona forms around NPs in biological matrices and how it affects NP-cell interactions.
The last 10 years have seen great interest in the development of new non-metallic plasmonic materials [ 135 , 136 ], and some of these materials have potential as nanoplasmonic optical probes in biosensing and bioimaging applications. Although the plasmon resonance of gold nanostructures can be tuned over a wide spectral range by adjusting the size and morphology of the NPs, alternative plasmonic materials provide an additional strategy for controlling the plasmon wavelength independently of NP morphology. This is of interest because the fate of NPs in cells or tissue depends on the size and morphology of the probes. To take advantage of alternative plasmonic materials as probes in biological imaging, scalable size- and shape-selective fabrication strategies, as well as biocompatible surface-passivation approaches, need to be advanced.
Technological improvements in hyperspectral imaging and deep learning imaging analysis are expected to further enhance the sensitivity and fidelity of plasmonic NP imaging. In the future, a convergence of different optical NP imaging techniques, including photoacoustic, photothermal, vibrational and scattering based approaches, may result in the imaging of biological processes across multiple length scales in time and space, facilitating the investigation of biological processes from the whole organism down to the cellular level.
Concluding remarks
Localized plasmons in nanoscale particles give rise to light-dependent responses that are tunable through the morphology of the NPs. These properties make nanoplasmonic probes highly adaptable tools in different imaging modalities. Importantly, the function of the NPs is not limited to that of simple labels, but instead can involve biomimetic or photo-responsive functions that can probe biological systems in a unique way. Therefore, nanoplasmonic probes complement conventional fluorescent optical probes and can provide additional insight into complex biological systems.
Acknowledgments
B M R acknowledges support from the National Institutes of Health through Grants R01CA138509 and R01GM142012.
Spectral histopathology: a diagnostic modality with an accuracy exceeding that of combined classical histopathology and immunohistochemistry
Max Diem
Northeastern University and CIRECA LLC, United States of America
Status
Spectral histopathology (SHP) [ 137 , 138 ] is an optical, multispectral imaging technique that utilizes the rich infrared fingerprint region (2.5–12 µm wavelength) to identify differences in the biochemical composition of tissue voxels measuring approximately 10 × 10 × 5 µm^3. The data collected from a 1 mm^2 area of tissue at 1500 infrared (colour) channels exceed 10^7 discrete compositional data points. This huge amount of data, acquired from even a small piece of tissue, allows multivariate analysis via self-learning algorithms to render pathological assessments with an accuracy surpassing that of classical pathology.
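A quick check of the quoted data volume, using the 10 µm pixel size and 1500 spectral channels stated above:

```python
# Data points from an SHP scan: (pixels in the scanned area) x (spectral
# channels per pixel), with the pixel size and channel count quoted in the text.

def shp_data_points(area_mm2, pixel_um=10.0, channels=1500):
    pixels = area_mm2 * 1e6 / pixel_um**2   # mm^2 -> um^2, then pixel count
    return pixels * channels

print(shp_data_points(1.0))   # 1.5e7 data points from 1 mm^2, i.e. > 10^7
```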
SHP is based on the detection of changes in biochemical composition [ 139 ] rather than morphological features, and is therefore more akin to methods such as MALDI-TOF (matrix-assisted laser desorption/ionization time-of-flight mass spectrometry) imaging [ 140 ]. SHP has demonstrated that the changes in tissue morphology observed in classical pathology are accompanied by changes in biochemical composition at the cellular level [ 141 ]. Thus, these imaging methods provide novel insight into disease-related biochemical changes and, since SHP is based on a physical measurement, it renders diagnoses on a more objective and reproducible basis than methods based on assessing cell morphology and tissue architecture.
Several large studies [ 139 , 141 – 144 ] of archived patient lung tissue, carried out in collaboration between the Department of Thoracic Surgery of the City of Hope (COH) Cancer Center in Duarte, CA, the Department of Pathology at the University of Massachusetts Medical School (UMP), Worcester, MA, and CIRECA, LLC (then in Cambridge, MA), demonstrated that SHP can distinguish small cell lung carcinomas (SCLC), adenocarcinomas (ADCs) and squamous cell carcinomas (SqCCs) of the lung with accumulated accuracies better than 90%. In addition, SHP was found to resolve interobserver differences in lung pathology [ 142 ] for tissue core sections on which the COH and UMP diagnoses differed (about 15% of all cases). In these instances, the SHP results mostly agreed with those of immunohistochemistry (IHC), considered the gold standard for discriminating non-small cell lung cancers; this is because both SHP and IHC are sensitive to the presence of particular markers that are invisible in classical histopathology [ 145 ]. Furthermore, SHP reliably classified mixed tumours such as adenosquamous carcinomas (AdSqCC), which often exhibit regions showing characteristic signs of either of the two cancer classes.
Current and future challenges
Next, some representative results will be presented that show the potential and limitations of the SHP technology. These results were taken from comprehensive studies published elsewhere [ 142 ].
In the two tissue core sections shown in figure 18, the pathological diagnoses from COH and UMP disagreed on the ADC vs. SqCC assignment. In both cases, SHP agreed with the IHC results: the SHP predictions (panels (B) and (D) of figure 18) agree with the IHC-positive areas (panel (A): TTF-1 stain for ADC; panel (C): p40 stain for SqCC) not only in the gross diagnosis but also in the regions that show a positive IHC response. This observation re-emphasizes the statement above that SHP and IHC are sensitive to the presence of specific cancer markers.
Figure 19 shows two tissue cores from the same patient biopsy but from different regions of the tumour. Both sections were classified by pathology as AdSqCC. In both cores, the IHC and SHP results indicate that the cores are nearly entirely SqCC (panels (A) and (B)) or ADC (panels (C) and (D)), depending on the exact location within the tumour mass from which the tissue core was collected. This result indicates a 'biphasic' AdSqCC, in which cells from areas that are clearly ADC and cells that are clearly SqCC merge at the margins of separate tumours and create regions of mixed cancer. An alternative description (no longer recognized within the WHO classification system) used the term 'admixed' AdSqCC, in which anaplastic tumour cells that may arise from multipotent stem cells show no microscopic evidence of squamous or glandular differentiation [ 146 ].
The examples shown in figures 18 and 19 point to the advantages of the SHP methodology, which is based on reproducible, machine-based data analysed by self-learning multivariate algorithms and is totally independent of cell morphology, staining patterns and tissue architecture. The overall accuracies in the classification of SCLC, ADC and SqCC (99.6%, 92.2% and 91.6%, respectively) increased over the five-year period in which these studies were carried out, mostly owing to an increased understanding of the number of tissue classes required for reliable algorithm training. This involved distinguishing between truly normal and cancer-adjacent normal tissue classes, and using IHC to verify pathological diagnoses. These results pave the way toward a wider application of this technology once certain operational caveats, discussed in the next section, are addressed.
Advances in science and technology to meet challenges
There are two major aspects that need to be addressed for this technology to become more accepted in medical practice. The first is the instrumentation for acquiring spectral data. Commercially available infrared micro-spectrometers are currently based either on interferometric methods with cryogenic semiconductor array detectors, or on quantum cascade laser (QCL) systems with micro-bolometer detector arrays. The former are generally adequate in terms of data quality and reproducibility, but are too slow by at least an order of magnitude for large-scale medical applications. QCL-based micro-spectrometers offer many advantages over interferometric instruments, but are at present plagued by coherence-induced artifacts and high price, which have prevented their wide application [ 147 ]. Collaboration between academic and industrial research will be required to address these instrumental issues.
The second aspect of methodology improvement involves the training of self-learning multivariate algorithms for the analysis of the hyperspectral datasets. Different research groups have used support vector machines, deep-learning neural networks (NNs), linear discriminant analysis algorithms or other methods for this analysis. While the choice of the mathematical method appears to be less important (all of them produce comparable accuracies), the training of these algorithms is a task that requires more attention. First, the number of patients in the training and test sets must fulfil the standards of general medical statistics [ 143 ], and many reported results have ignored this point. Second, the aim of some research projects ‘...to detect cancer by spectral methods...’ is too narrow, since the detection of cancer can be performed very well indeed by classical pathology; spectral methods must be gauged by a much higher standard and must include the cancer type, the tumour micro-environment, necrosis and the detection of immune cell activation. Inclusion of such effects requires very close cooperation with pathology, using immunohistochemistry or other advanced methods in modern histology and oncology to correlate the spectral data. This includes relating the spectral data to the presence of cancer markers and/or their surrogates [ 145 ].
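As a purely illustrative sketch of supervised tissue classification — using nearest-centroid assignment as a much simpler stand-in for the SVM, NN and LDA algorithms discussed above, with invented two-band "spectra" and class labels — the training step amounts to averaging annotated spectra per class:

```python
# Toy nearest-centroid classifier standing in for the multivariate
# algorithms (SVMs, NNs, LDA) used in spectral histopathology.
# All "spectra", band counts and labels below are invented for illustration.

def train_centroids(spectra, labels):
    """Average the training spectra of each annotated tissue class."""
    sums, counts = {}, {}
    for spec, lab in zip(spectra, labels):
        acc = sums.setdefault(lab, [0.0] * len(spec))
        for i, v in enumerate(spec):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(spectrum, centroids):
    """Assign the class whose centroid is closest in squared Euclidean distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: sqdist(spectrum, centroids[lab]))

# Two synthetic tissue classes with distinct band intensities
train = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
labs = ["ADC", "ADC", "SqCC", "SqCC"]
centroids = train_centroids(train, labs)
```

A real pipeline would of course use thousands of annotated pixel spectra per class and a patient-level train/test split, as the statistical requirements cited above demand.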
Concluding remarks
The results presented herein demonstrate that SHP delivers a level of diagnostic accuracy that matches that of classical histopathology combined with immunohistochemistry. This very high accuracy results from the use of inherent, spectral (optical) signatures which are manifestations of the biochemical composition of tissue pixels. These signatures can be observed with a very high degree of reproducibility.
The use of multivariate mathematical analysis transforms the observed raw spectral datasets into images that depict heterogeneity in a tumour, tumour types and sub-types, the effect of a cancerous lesion on its surroundings, and the presence of tumour-infiltrating immune cells. The spectral signatures from annotated regions of tissue have been used to train multivariate algorithms for the unsupervised diagnostics of tissue samples. Several research groups have reported such analyses for different tissue types and diseases [ 148 – 151 ].

Data availability statement
No new data were created or analysed in this study.

J Opt. 2024 Jan 1; 26(1):013001 (CC BY)
PMC10730679 (PMID 38130663)

Introduction
Accurate exposure assessment is central to risk management and relies upon a spatial representation of the hazard, within which exposed people and/or assets can be quantified. Regional multi-volcano analyses that consider population exposure typically consider hazard footprints as concentric radii extending from 5 km (Ewert and Harpel 2004 ) to 200 km (Small and Naumann 2001 ) from an assumed vent location. Concentric radii simplify many challenges encountered in defining hazardous areas around volcanoes, including the identification of eruption source parameters and eruption scenarios based on stratigraphic studies, and their modelling using dedicated tools. Amongst studies using radii-based approaches, the Volcano Population Index (VPI) was developed for Central America by Ewert and Harpel ( 2004 ) for radii of 5 and 10 km, with 30 km added for the ranking of US volcanoes by Ewert ( 2007 ). This was further expanded to all volcanoes in the Volcanoes of the World Database (VOTW) from the Smithsonian’s Global Volcanism Program (GVP), where the number of people within 5, 10, 30 and 100 km radii is provided as part of the general volcano information ( https://volcano.si.edu ; Global Volcanism Program 2013 ). The Population Exposure Index (PEI) of Aspinall et al. ( 2011 ) and Brown et al. ( 2015 ) subsequently used fatality-weighted population counts within 10, 30 and 100 km radii of volcanoes to rank risk to life from volcanoes. A 10 km radius was considered by all as large enough to capture the hazard footprints and populations exposed for most eruptions (i.e., Volcanic Explosivity Index (VEI) ≤ 3). A 30 km radius was considered by Ewert ( 2007 ) to capture proximal populations globally, and to provide a fair representation of exposure to life-threatening hazards accompanying eruptions of VEI ≤ 4. A 100 km radius was used in the PEI to capture the majority of life-threatening volcanic hazards for most eruptions, although Brown et al. ( 2015 ) recognised that life-threatening hazards from the largest eruptions may extend beyond that.

Only one study, by Small and Naumann ( 2001 ), ranked volcanoes within a global study of population exposure within concentric radii. They identified the Gede-Pangrango volcanic complex in Indonesia as the volcano with the highest number of people living within 100 km (29.4 million in 1990). Other global studies using concentric radii have ranked countries by their human population exposure (e.g., Brown et al. 2015 ; Pan et al. 2015 : Indonesia ranked with the highest population), while others have used concentric radii to rank volcanoes within a country- or regional-level study (Guimarães et al. 2021 and Nieto-Torres et al. 2021 , ranking various volcanoes in Latin America using different formulations of the risk equation).
Although concentric radii allow for comparison amongst large numbers of volcanoes, they ignore the directionality and change in intensities as a function of distance and direction from the source of most volcanic hazards. Accurate exposure estimates therefore require the spatial relationship between hazard intensity, and the distribution and characteristics of exposed assets to be accounted for. Jenkins et al. ( 2022a ) carried out probabilistic hazard modelling for 40 volcanoes in southeast Asia. Results consistently identified Merapi (Indonesia) as the volcano producing the largest exposure amongst various hazards, which is in contrast with the identification of Gede-Pangrango by Small and Naumann ( 2001 ). This observation raises questions regarding the use of radii-based studies. Hence, we investigate here how much exposure calculated from radii differs from model-based analyses that account for the spatial distribution of hazard intensity. To do so, we compare maximum hazard extents and population exposure estimates calculated from concentric radii with those calculated from the simulation of four different volcanic hazards from different eruption scenarios at 40 volcanoes in Indonesia and the Philippines (Fig. 1 ; study presented in Jenkins et al. 2022a ). The 40 volcanoes were chosen based on the occurrence of relatively large explosive eruptions (VEI ≥ 3) and proximity to population. We identify general trends across the volcanoes, and then use three case-study volcanoes—Gede-Pangrango, Cereme, and Merapi in Java—to further investigate why similarities and/or discrepancies exist between population and assets exposure estimates using the two approaches. This provides an evidence-based reference for critically interpreting existing radii-derived estimates of exposure to volcanic hazards. | Methods
Hazard modelling
This study relies on the probabilistic simulations of Jenkins et al. ( 2022a ), which assessed the hazard associated with tephra fall loads (using Tephra2; Bonadonna et al. 2005 ), impact from large clasts (Rossi et al. 2019 ), and inundation from column (Aravena et al. 2020 ) and dome (recalibrated version of LaharZ; Schilling 1998 ; Widiwijayanti et al. 2009 ) collapse pyroclastic density currents (PDC). For tephra fall, large clast impact, and column collapse PDC, eruption scenarios of VEI 3, 4 and 5 were considered. In the absence of a relationship between VEI and volume for dome-collapse PDC, Jenkins et al. ( 2022a , b ) modelled volumes of 4.5 × 10 5 m 3 and 9.8 × 10 6 m 3 , corresponding respectively to the 50 th and 90 th percentiles obtained from FlowDat (Ogburn et al. 2016 ). In addition, Jenkins et al. ( 2022a ) applied two buffers of 300 and 990 m to account for the overspill of unconfined PDCs reported in Lerner et al. ( 2022 ) for the studied eruptions (Merapi 2010; Fuego 2018). In total, 697,080 individual model runs were aggregated into 2,280 scenario-based probabilistic hazard footprints representing conditional exceedance probabilities of 10%, 50% and 90%. For each hazard, exposure of various assets was ranked across the 40 volcanoes. Two separate rankings were developed: a conditional ranking for each VEI or volume, and an absolute ranking that weighted each eruption scenario by its probability of occurrence to give an overall rank per hazard.
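The per-pixel aggregation of individual runs into conditional exceedance probabilities can be sketched as follows — a minimal illustration of the principle, not the actual VolcGIS/Tephra2 workflow, and with invented load values:

```python
# Fraction of model runs in which a hazard metric at one pixel meets a
# threshold; real footprints repeat this per pixel over thousands of runs.

def exceedance_probability(runs, threshold):
    """Return the fraction of runs with value >= threshold."""
    return sum(v >= threshold for v in runs) / len(runs)

# Simulated tephra loads (kg/m2) at one pixel across ten hypothetical runs
loads = [0.5, 2.0, 3.5, 7.0, 12.0, 20.0, 45.0, 80.0, 110.0, 150.0]
p_destructive = exceedance_probability(loads, 100)  # load >= 100 kg/m2
```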
Exposure calculation
Population exposure was estimated using the 1 km 2 resolution LandScan 2018 dataset (Rose et al. 2019 ) in VolcGIS (Biass et al. 2022b ; Jenkins et al. 2022a ). For tephra fall, we consider exposure to accumulations of 1, 5 and 100 kg/m 2 , which range from disruptive (covering of road markings) to destructive (collapse of the weakest roofs) impacts (Jenkins et al. 2015 ). For large clasts, we consider the maximum distance reached by lapilli resulting in kinetic energies at impact ≥ 30 J, a threshold for skull fracture (Yoganandan et al. 1995 ). For PDCs, we consider exposure to binary inundation by the flow, reflecting their life-threatening nature.
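The exposure calculation itself reduces to summing population cells wherever the hazard footprint meets the relevant threshold. The sketch below illustrates this on a tiny invented grid; the actual analysis operates on 1 km² LandScan rasters within VolcGIS:

```python
# Sum population over cells where the hazard metric meets a threshold.
# Grids and values are invented; real inputs are georeferenced rasters.

def exposed_population(population, hazard, threshold):
    """Total population in cells with hazard >= threshold."""
    return sum(p for pop_row, haz_row in zip(population, hazard)
                 for p, h in zip(pop_row, haz_row) if h >= threshold)

pop    = [[100, 250], [0, 4000]]   # people per cell
tephra = [[120,   8], [3,   60]]   # tephra load per cell, kg/m2
n_disrupted = exposed_population(pop, tephra, 5)     # load >= 5 kg/m2
n_destroyed = exposed_population(pop, tephra, 100)   # load >= 100 kg/m2
```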
Estimating volcanic hazards as concentric radii facilitates comparison across multiple volcanoes and is therefore a popular approach in regional or global assessments. This study provides the first benchmark between this radii approach and the use of models for volcanic hazard assessments. A critical interpretation of the distances reported in Fig. 2 requires understanding the conceptual foundations for radii-based versus model-based hazard footprints. In most studies (e.g., Brown et al. 2015 ; Ewert 2007 ; Ewert and Harpel 2004 ), radii are identified based on a compilation of past events by Newhall and Hoblitt ( 2002 ). Although representing actual realisations of natural events, their limited witnessed occurrences might not represent the full scope of possibilities that could occur in future eruptions (Bonadonna 2006 ). Inferring a circular hazard footprint from maximum runouts of observed processes intrinsically implies an equal radial probability of occurrence, an assumption that was not made by Newhall and Hoblitt ( 2002 ), who considered directionality in the development of event-trees. In contrast, modelled footprints rely on approximations of natural processes based on a range of numerical, analytical and empirical techniques which, when combined with probabilistic modelling methodologies, allow for the exploration of possible outcomes that have not necessarily yet been realised. Regardless of the nature of the model used, modelled footprints better reproduce the directionality of hazards but are always more demanding in terms of computing power and parametrisation (number of eruption source parameters and other input conditions).

Global hazard modelling is now becoming viable thanks to increasing available computing power and dedicated open-source software (e.g., Bertin et al. 2019 ; Biass et al. 2016 ; Mahmood et al. 2015 ; Palma et al. 2014 ; Tierz et al. 2017 ), potentially opening the door to global Probabilistic Volcanic Hazard Assessments (PVHA). However, such regional to global studies require a balance between model sophistication and the computing power and input data available.
Regarding exposure analysis, results show that except for tephra, population estimates are typically larger using radii than modelled footprints. Although some relationships between the radii-derived and model-derived exposure estimates might appear similar on log-log plots of Figs. 4 and 6 , the quantitative error analysis presented in Online Resource 2 reveals a scatter that can span orders of magnitude. In addition, an agreement between both methods can be coincidental due to the distribution of inhabited areas. Consequently, we deliberately restrict the scope of our study to a direct comparison between the results of both approaches rather than overly interpreting any relationships between or within the results, and our study only intends to provide an evidence-based reference for critically interpreting existing radii-derived estimates of exposure to volcanic hazards. Should the study be used to guide future applications of radii to exposure assessment, the findings must be evaluated considering the geographic scope of our study and the magnitude of errors, i.e. the confidence intervals from Figs. 4 and 5 .
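One simple way to express by how much a radius-derived estimate departs from a model-derived one — an illustrative order-of-magnitude metric, not the error analysis of Online Resource 2 — is the log10 ratio of the two exposure values:

```python
import math

def log10_ratio(radii_exposure, model_exposure):
    """Orders of magnitude by which the radius-based estimate exceeds
    (positive) or falls short of (negative) the model-based estimate."""
    return math.log10(radii_exposure / model_exposure)

# Invented example: a radius counts 1,000,000 people, the footprint 10,000
error = log10_ratio(1_000_000, 10_000)   # +2: overestimate by two orders
```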
In conclusion, our study provides a benchmark for objectively comparing hazard analyses based on concentric radii and modelled hazard footprints (Fig. 2 ; Table 1 ) and reveals that: A radius of 10 km generally underestimates the extent of VEI 3 hazard footprints for column collapse PDC and overestimates the extent for tephra loads ≥ 100 kg/m 2 , large clast impacts ≥ 30 J and dome collapse PDC. A radius of 30 km generally underestimates the footprint extent of tephra fallout for VEI ≥ 4 and represents a median value of distances reached by a 100 kg/m 2 load from a VEI 4 eruption. A 30 km radius generally overestimates the runout of column collapse PDCs (e.g., VEI 4 and 5 eruptions have only a ~ 5% probability of exceeding 34 and 36 km, respectively). Only tephra fall from VEI 4 and 5 eruptions is likely to exceed distances of 100 km. VEI 5 eruptions have a 12% probability of producing tephra loads ≥ 100 kg/m 2 beyond 100 km.
Regarding population exposure in southeast Asia, our analysis suggests that: There is a general positive relationship between population exposure derived from radii and from hazard footprints for tephra fallout, column collapse PDC and large clast impacts, but not for dome collapse PDC, which is very strongly dependent upon local topography. The selected radii i) dominantly overestimate population exposure to column collapse PDC, ii) almost exclusively overestimate population exposure to large clast impacts and iii) always overestimate population exposure to dome collapse PDC. Only population exposure to tephra fallout can be underestimated by the radii approach depending on the load threshold and the VEI. These observations must be analysed in the perspective of the large (i.e., orders of magnitude) errors and the potential coincidental population distribution within concentric radii and directional hazard footprints.
In addition to the development of global hazard modelling methodologies, we identify three future research directions for global volcanic hazard, exposure and impact assessment: The development of global PVHA that accounts for the spatiotemporal probability of eruptions (e.g., Deligne et al. 2010 ; Hayes et al. 2022a ; Rougier et al. 2018 ; Sheldrake et al. 2020 ) and systematically estimates uncertainties (Marzocchi et al. 2010 ). Exposure analyses that consider more assets than population alone (Biass et al. 2017 ; Hayes et al. 2022b ), which is made possible by crowdsourcing, modern spatial data infrastructures and machine learning applied to big Earth Observation data (Biass et al. 2022a , b ; Buchhorn et al. 2020 ; Giuliani et al. 2019 ; Gorelick et al. 2017 ). The development of methodologies that estimate the potential consequences for the exposed populations and assets. This requires the parametrisation of vulnerability, which is commonly achieved using a combination of opportunistic post-event impact assessments (e.g., Elissondo et al. 2016 ; Jenkins et al. 2017 ; Magill et al. 2013 ), experiments (e.g., Ligot et al. 2023 ; Wardman et al. 2012 ; Williams et al. 2021 ) and theoretical studies (Jenkins et al. 2014 ). The majority of studies to date have concentrated on physical impacts from tephra fall and on buildings (Deligne et al. 2022 ). New efforts must attempt to capture direct impacts on other assets as well as other dimensions of vulnerability relevant to risk reduction actions (socio-economic impacts, systemic vulnerability).
Effective risk management requires accurate assessment of population exposure to volcanic hazards. Assessment of this exposure at the large-scale has often relied on circular footprints of various sizes around a volcano to simplify challenges associated with estimating the directionality and distribution of the intensity of volcanic hazards. However, to date, exposure values obtained from circular footprints have never been compared with modelled hazard footprints. Here, we compare hazard and population exposure estimates calculated from concentric radii of 10, 30 and 100 km with those calculated from the simulation of dome- and column-collapse pyroclastic density currents (PDCs), large clasts, and tephra fall across Volcanic Explosivity Index (VEI) 3, 4 and 5 scenarios for 40 volcanoes in Indonesia and the Philippines. We found that a 10 km radius—considered by previous studies to capture hazard footprints and populations exposed for VEI ≤ 3 eruptions—generally overestimates the extent for most simulated hazards, except for column collapse PDCs. A 30 km radius – considered representative of life-threatening VEI ≤ 4 hazards—overestimates the extent of PDCs and large clasts but underestimates the extent of tephra fall. A 100 km radius encapsulates most simulated life-threatening hazards, although there are exceptions for certain combinations of scenario, source parameters, and volcano. In general, we observed a positive correlation between radii- and model-derived population exposure estimates in southeast Asia for all hazards except dome collapse PDC, which is very dependent upon topography. This study shows, for the first time, how and why concentric radii under- or over-estimate hazard extent and population exposure, providing a benchmark for interpreting radii-derived hazard and exposure estimates.
Supplementary information
The online version contains supplementary material available at 10.1007/s00445-023-01686-5.
Figure 2 shows the variability in maximum distance reached by the hazard footprints simulated in our study. Tephra fall is the farthest-reaching hazard and the most variable in maximum extent reached. In the present modelling framework (i.e., fixed total grain-size distribution and the use of a time-independent analytical tephra dispersal model), the relationship between plume height and the variability in wind conditions is responsible for the different distances reached by tephra fall for each VEI. By contrast, small-volume dome collapse PDC is generally the most proximal and least variable hazard. The limited reach and variability of dome collapse PDCs reflect the strong control of the H/L parameter and the steep topography of the predominantly stratocone morphology in limiting PDC runout. For column collapse PDC, the selected model outputs inundation exceedance probabilities aggregating thousands of model runs. Since the methodology of Aravena et al. ( 2020 ) prevents access to individual simulations, the spread of distances in Fig. 2 is based on runout distances associated with the 10%, 50% and 90% exceedance probabilities for each modelled VEI and thus does not include the smallest 10% and largest 10% of distances simulated. Overall, Fig. 2 shows that although the 10, 30 and 100 km radii capture the life-threatening hazards for most simulations, the large spread in distances reached reflects the complexity of processes governing volcanic hazards and identifies a discrepancy in exposure estimates from concentric radii.
How well do concentric radii approximate hazard footprints?
Hazard models variably account for the physical parameterization of volcanic processes as well as non-volcanic factors that influence the spatial distribution of volcanic hazards (topography or wind conditions) and are therefore expected to provide more realistic representations of hazard characteristics than concentric radii. However, a well-informed comparison requires us to review the underpinning rationale for the selection of the radii distances as proxies for hazard footprints.
Based on 191 PDCs, Newhall and Hoblitt ( 2002 ) estimated that eruptions of VEI 1–2 and VEI 3 had probabilities of producing PDC runouts exceeding 10 km of 10% and 20%, respectively, although without specifying critical aspects of the considered PDC such as the generation mechanism. This observation was the basis for the choice of a 10 km radius in the Volcano Population Index (VPI) of Ewert and Harpel ( 2004 ). By comparison, over 63% of column collapse PDC extents for VEI 3 eruptions and < 4% of dome collapse PDC simulations exceed 10 km (Fig. 2 ; Table 1 ). Newhall and Hoblitt ( 2002 ) also suggested that eruptions ( n = 39) of VEI 3 had a 40% probability of producing tephra load accumulations of at least 10 cm (i.e., ~ 100 kg/m 2 ) beyond 10 km. By comparison, 3.3% of our VEI 3 tephra simulations exceed the 10 km mark for the 100 kg/m 2 threshold. None of the smaller simulated volume dome collapse PDCs or VEI 3 large clast simulations extend beyond 10 km.
The choice of a 30 km radius by Ewert ( 2007 ) was similarly based on data from Newhall and Hoblitt ( 2002 ), who suggested PDCs from VEI 4–5 eruptions had approximately a 5% chance of exceeding 30 km runout. At a 5% exceedance probability, our simulations suggest a runout distance from column collapse PDC of 34 and 36 km for VEI 4 and VEI 5 eruptions, respectively (Table 1 ; Fig. 2 ). For dome collapse PDCs, simulated distances at the 5% exceedance probability are 3.6 km (4.5 × 10 5 m 3 ) and 9.4 km (9.8 × 10 6 m 3 ). Newhall and Hoblitt ( 2002 ) also indicated a ~ 10% probability of exceeding tephra accumulations of 10 cm at 30 km downwind for VEI 3 eruptions, and 80% for VEI 4 eruptions. Our results suggest that accumulations of 100 kg/m 2 beyond 30 km occur in 0% and ~ 50% of all simulations for eruptions of VEI 3 and 4, respectively.
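Runout distances quoted at a given exceedance probability (e.g., the 5% values above) can be read off a set of simulated maximum distances as sketched below — a simplified empirical-percentile reading with invented distances, not the aggregation actually used by the PDC models:

```python
# Distance matched or exceeded by the given fraction of simulations.
# Distances are invented; real values come from thousands of model runs.

def distance_at_exceedance(distances, prob):
    """Return the distance that a fraction `prob` of simulations reach or exceed."""
    ordered = sorted(distances, reverse=True)
    k = max(int(prob * len(ordered)) - 1, 0)
    return ordered[k]

runouts_km = list(range(1, 21))                 # 20 hypothetical runouts
d5  = distance_at_exceedance(runouts_km, 0.05)  # reached by 1 run in 20
d50 = distance_at_exceedance(runouts_km, 0.50)  # reached by half the runs
```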
A 100 km radius was justified in Brown et al. ( 2015 ) as capturing most PDC and lahar flow runouts, while Ewert ( 2007 ) did not consider downstream flow hazards (i.e., lahars) to be captured by radii. The volcano fatality database of Brown et al. ( 2015 ) indicated that lahars or secondary lahars caused fatal events that typically extended to around 20 km, but with a range of 1–100 km. None of our PDC simulations exceed 50 km (Table 1 ; Fig. 2 ), and Jenkins et al. ( 2022a ) did not simulate lahars as their triggering mechanisms and initial conditions cannot be parametrised for such regional studies. For tephra fall, both VEI 4 and 5 eruptions reach beyond 100 km, but only VEI 5 eruptions have a 12% probability of producing loads ≥ 100 kg/m 2 beyond this distance. Thus, 100 km may be considered a conservative maximum distance encapsulating PDC and lahar hazards but an underestimate for potentially damaging tephra falls.
Comparing model- and radii-derived exposure estimates
Figure 3 shows population exposure within concentric radii of 10, 30 and 100 km around all 40 volcanoes in our study for the regions shown in Fig. 1 . Java and the Philippines dominate exposure within 30 and 100 km, whereas the island volcanoes of Halmahera/Banda Sea and Sulawesi have the highest exposure within the 10 km radius. Some volcanoes have relatively low population exposure within 10 km (Raung and Pinatubo) but large exposure within 30 km, while island volcanoes have most of their exposure either concentrated within 10 km (Banda Api, Awu) or between 30 and 100 km away (Krakatau). As found by Small and Naumann ( 2001 ), Gede-Pangrango has the largest population exposure within a 100 km radius, which is largely attributed to its proximity to Jakarta, 60 km to the north.
Comparison with model-derived estimates
Figure 4 compares the populations exposed across the 40 considered volcanoes within concentric radii of 10, 30 and 100 km (x axis) with those exposed to probabilistic footprints of tephra fall accumulations ≥ 1 and ≥ 100 kg/m 2 , column collapse PDC inundation, and large clast impact, for each simulated VEI (3, 4 and 5) (this study; y axis). Figure 5 shows similar data for dome collapse PDC footprints. As good agreement might be coincidental, reflecting the specificity of the region or volcano (i.e., population distribution constrained by the geometry of landforms as a function of the directionality of hazards), we do not suggest updated radii distances that could be used globally. Instead, we use the comparison to highlight by how much exposure estimates can differ as a function of hazard and VEI, and to provide a currently non-existent evidence-based reference for the interpretation of radii-derived exposure analyses. Unless specified otherwise, the following sections discuss a 50% probability of hazard occurrence. A quantitative uncertainty analysis for all probabilities is presented in Online Resource 2, along with regression analyses for all hazards, probabilities of occurrence and radii.
General trends
Results show that, for the most part, concentric radii are conservative and overestimate exposure, although exceptions occur for specific combinations of radius distance, VEI and hazard. Figure 4 shows a general positive relationship between population exposures estimated from concentric radii and from modelled tephra fallout, column collapse PDC and large clast impact footprints. Despite differences in exposure values varying up to multiple orders of magnitude and a large variability amongst hazards and VEI, Fig. 4 suggests that, at a very granular scale, a concentric radii approach can often distinguish high- from low-exposure volcanoes. In contrast, dome collapse PDCs show a poor relationship between exposure estimated from footprints and radii. This can be explained by their typical directionality affecting a limited number of valleys, resulting in radii greatly overestimating exposure.
For life-threatening hazards at the volcanoes considered in our study, radii ≥ 30 km (for tephra fallout ≥ 100 kg/m 2 and column collapse PDC) and ≥ 10 km (for dome collapse PDC and large clasts) typically overestimate population exposure relative to the modelled footprints, particularly for VEI 3 and 4 scenarios. The large clast impact from VEI 5 scenarios shows relatively good alignment with the 10 km radius. For the lower threshold of tephra fall, where impacts may be more disruptive or damaging than directly life-threatening, concentric radii ≥ 30 km (VEI 3 and 4) and ≥ 100 km (VEI 5) appear more aligned with modelled footprints (Fig. 4 ).
There is significant variation in model- vs radii-derived tephra fall exposure at the volcano scale because of the coincidence, or not, of populations and predominant wind conditions. For example, volcanoes with large conurbations within 100 km but not in the direction of prevailing winds show much reduced exposure when tephra dispersal modelling is employed rather than radii (e.g., Pinatubo, Taal, Gede-Pangrango). Conversely, other volcanoes show increased exposure when wind conditions are considered. This is the case for Cereme, which lies approximately 100 km upwind from Bandung and nearly 200 km upwind from Jakarta, distances that are reached by tephra falls ≥ 1 kg/m 2 from VEI 4 and VEI 5 eruptions, respectively. A similarity in terms of exposed population and trend (i.e., conformance with the 1:1 line) is observed for column collapse PDCs from VEI 3 eruptions with a 10 km radius, and from VEI 4 and 5 eruptions with radii between 10 and 30 km (Fig. 4 ). This is a result of modelled column collapse PDCs having an almost circular footprint reaching mostly between ~ 10 and 20 km from source (Fig. 2 ; Table 1 ).
Case studies
Using a 100 km radius and 1990 population data, Small and Naumann ( 2001 ) identified Gede-Pangrango (Indonesia) as the volcano with the highest population exposure out of 1405 worldwide volcanoes. Across the 40 volcanoes considered in Jenkins et al. ( 2022a ) and using updated 2018 population data, Gede-Pangrango ranks 8 th , 7 th and 1 st for radii of 10, 30 and 100 km, respectively, illustrating how radii ≥ 30 km progressively include the exposure of Jakarta (10.56 MM people) and Bandung (2.45 MM people; Fig. 6 , Table 2 ). When considering hazard footprints, the population-exposure ranking of Gede-Pangrango across the 40 volcanoes considered in Jenkins et al. ( 2022a , b ) varies between 3 (VEI 5) and 8 (absolute) for tephra accumulations ≥ 1 kg/m 2 , and between 3 (column collapse PDC for VEI ≥ 4) and 30 (large clast impact for VEI 3) for the other modelled hazards (Figures 12–15 of Jenkins et al. 2022a ). Figure 6 maps the location of the urban centres relative to flow directionality and highlights how the upwind location of Bandung and the crosswind location of Jakarta considerably reduce the total exposure to tephra fallout from Gede-Pangrango when footprints are used. In contrast, Merapi, which ranks 7 th when considering the exposure within a 100 km circular radius, almost always results in a higher exposure than Gede-Pangrango when modelled footprints are used, and consistently ranks within the five volcanoes with the highest exposure to all considered hazards. This is due to the presence of a smaller urban centre (Yogyakarta, 0.42 MM people) within 30 km of the volcano and closer to the main tephra dispersal axis. Finally, Cereme, which ranks 4 th when using a 100 km buffer, is the volcano with the highest exposure to tephra fallout ≥ 1 kg/m 2 and to PDC inundation from column collapse for VEI ≥ 4, capturing the exposure of Cirebon (0.33 MM people) and Bandung to large eruptions, respectively. It is, however, interesting to note that when weighting the exposure from hazard footprints by long-term probabilities of occurrence (Hayes et al. 2022a ), Cereme’s relatively low eruptive frequency results in it ranking > 10, whereas Merapi ranks 1 st for all hazards except inundation from column collapse PDC.
We would like to thank Eduardo Rossi for his support in implementing the large clast model, Álvaro Aravena Ponce for his support with ECMapProb, Rudiger Escobar-Wolf for providing the MATLAB implementation of LAHARZ, Chris Gregg for his seamless editorial work and two anonymous reviewers for their constructive comments. We are also indebted to Edwin Tan for his unending support in the use of the ASE/EOS High-Performance Computing Cluster, Gekko.
Funding
This research was supported by the Earth Observatory of Singapore via its funding from the National Research Foundation Singapore and the Singapore Ministry of Education under the Research Centres of Excellence initiative and comprises EOS contribution number 551. Support was provided to Susanna F. Jenkins, Josh L. Hayes, and Geoffrey A. Lerner by the AXA Research Fund as part of a Joint Research Initiative on Volcanic Risk in Asia, to Elinor S. Meredith by National Research Foundation Singapore (MOE-MOET32021-0002), and to Sébastien Biass by the Swiss National Science Foundation (Grant #200020_188757).

Bull Volcanol. 2024 Dec 19; 86(1):3 (CC BY)
PMC10740281 (PMID 38129898)

Background
Juvenile idiopathic arthritis (JIA) is the most common persistent rheumatic condition in children [ 1 ]. By nature, it is chronic; in a Nordic cohort, at a time point 18 years after diagnosis, the disease was still active in 46% of patients, with 15% being treated with synthetic disease-modifying antirheumatic drugs (sDMARDs) and 19% with biologics (bDMARDs) [ 2 ].
As adolescents with JIA grow up, their disease is no longer monitored in a paediatric clinic, and the responsibility for their care is moved to an adult clinic. However, this transition involves more than just the actual point of transfer: it begins in early adolescence and will later involve the adult clinic team as well [ 3 ]. In a systematic review involving a number of chronic diseases, a structured transition was generally found to promote patients’ overall outcomes in many aspects of their transitions [ 4 ]. It has been shown that patients with JIA benefit from a planned transition; for example, drop-out rates from care diminish [ 5 ].
JIA also involves many comorbidities [ 6 ], which increase the burden of this chronic disease [ 7 ]. One of the most common comorbidities is JIA-related uveitis [ 8 , 9 ]. Having a chronic physical condition also increases the risk of mental disorders in youth [ 10 – 12 ]. These issues place additional demands on the transition. Sufficient self-management skills form the basis of a successful transition [ 13 ]. There are many types of practice that can enhance transition readiness and improve the self-management skills of these adolescents [ 14 ].
So far, there has not been an appropriate questionnaire to evaluate transition readiness in Finnish patients with JIA. The purpose of our study was to evaluate the self-management skills and transition readiness of Finnish patients with JIA and to estimate the usefulness and applicability of the specially designed PETRA questionnaire (Pediatric transition readiness to adult care) in the Finnish health care system.
Our aim was to find practical tools to support a successful transition and to study the possible consequences of an unsuccessful transition on disease outcomes. With this pilot PETRA questionnaire, we aimed to improve the transition process and thus support adolescents and their families more effectively. | Methods
This was a retrospective, real-life study based on our clinical practices in the transition of patients with JIA. The PETRA questionnaire was developed, inspired by the Canadian Good 2 Go questionnaire ( www.sickkids.ca/en/patients-visitors/transition-adult-care ). This pilot PETRA questionnaire evaluates several aspects of self-management, such as independence in disease management (medication, appointments, pain control), everyday life (school, future educational plans, mental support, exercise, sexual health), and substance abuse. The paper version of the questionnaire was in routine use in our paediatric rheumatology clinic in the Hospital District of Helsinki and Uusimaa (HUS) between June 2011 and December 2013. Due to changes in the electronic patient record system, the use of this paper version remained temporary, while the transition procedures have remained essentially unchanged in our clinic. Based on the clinician’s evaluation, the questionnaire was given to adolescents who were due to be transferred to an adult clinic, comprising patients with ongoing disease activity who were on systemic antirheumatic medication. Patients whose disease was in remission without medication [ 15 ] were not included. Altogether, 83 patients received and filled in the questionnaire as part of a routine rheumatological visit at the paediatric site.
In our final analysis, we used 13 questions, selected by consensus by an expert research team, based on opinions and psychometric evaluation, with three answer options (yes = 2, partly = 1, or no = 0). Higher scores indicated better readiness.
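The scoring rule above can be sketched in a few lines; the function name and answer encoding are illustrative, not part of the published questionnaire:

```python
# Illustrative sketch of the PETRA scoring rule described above:
# 13 items, yes = 2 / partly = 1 / no = 0; higher totals indicate
# better transition readiness. Names and input format are assumptions.
POINTS = {"yes": 2, "partly": 1, "no": 0}

def petra_score(answers):
    """Total readiness score for a list of answers to all 13 items."""
    if len(answers) != 13:
        raise ValueError("expected answers to all 13 items")
    return sum(POINTS[a] for a in answers)

print(petra_score(["yes"] * 13))  # → 26, the maximum possible score
```

A score of 26 corresponds to answering "yes" to every item, matching the maximum reported in the Results.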
We also gathered information about the patients from the medical records of their first adult visit. Patient-reported outcomes were measured using the Health Assessment Questionnaire (HAQ), visual analogue scales (VASs) for pain [ 16 ], the global assessment of well-being, and disease activity scores (DAS28). The physician-reported global assessment of disease activity was measured on a 21-point VAS [ 17 ]. Information about the social participation (including education, employment status and leisure-time activity) and the health behaviour (including smoking and physical activity) of the patients was also gathered. Non-restricted social participation involved engagement in studying, working, maternity leave, or military service [ 18 ].
To define the success of the transition, information was collected from both the paediatric and adult patient records. Based on a consensus of the research team’s expert opinions, with adjustments for the practices of the Finnish healthcare system, the key elements for a successful transition were defined as: (1) the patient was able to attend the first visit at the adult care centre independently, (2) the first visit took place as scheduled without extra communication, and (3) the medication was carried out as agreed at the last paediatric visit.
According to the transition practices of our paediatric rheumatology clinic in HUS, we do not transfer all patients to the adult site [ 19 , 20 ], but only those with active disease or on ongoing medication. If the disease activates later, these patients will be referred to the adult rheumatology clinic, for example, from primary health care, student, or occupational health care. A special rheumatological transition clinic is provided at the adult site [ 20 ], and special attention is paid, for example, to avoid dropping out of follow-up. If a patient does not appear for a visit as planned, the designated nurse will contact him or her. | Results
The clinical characteristics of the patients are shown in Table 1 .
Sixteen of the 83 patients who filled out the questionnaire did not need the transition to the adult clinic when they reached the transition age of 16 years. Nevertheless, 11 of these 16 arrived at the adult site during the follow-up period (Fig. 1 ). Therefore, altogether, only five patients were not transferred during this observation period.
The mean score from the transition readiness questionnaire was 22.5 (SD 2.2) and the median (IQR) was 23 (21.25). Ten patients (10%) received the maximum score 26.
Table 2 shows the individual questions contained in the PETRA questionnaire. Overall, the readiness score was satisfactory, but the questions regarding independence revealed the lowest level of skills.
The cut-off score for a successful transition by ROC-analysis was 24 (OR 6.11 (95% CI: 1.71 to 1.43)) (Fig. 2 ).
We were able to obtain all the information needed to define the success of the transition for 77 patients. In 55 (71%) patients, the transition was estimated to have been successful.
At the first adult visit, DAS28 was assessed in 58 patients. If the transition was defined as unsuccessful (score < 24), the DAS28 was higher, with a mean of 2.21 (SD = 1.14), and if the transition was defined as successful (score ≥ 24) the DAS28 was lower, with a mean of 1.35 (SD = 0.48), p < 0.001. | Discussion
In our study, the transition was classified as successful in 71% of the patients with JIA. The main issues behind unsatisfactory transition results were poor adherence to medication, inability to comply with scheduled appointments, and the adolescent’s lack of independence at the visits. These are all essential elements of the self-management skills needed for a positive transition process [ 13 ]. Other self-management skills include, for example, practical abilities to manage symptoms and administer medications, as well as the skills needed to handle the stress resulting from a chronic condition [ 13 ]. During the adolescent years, complex neurodevelopmental processes occur in the brain, and the demands of managing a chronic disease can be overwhelming [ 22 ] and present specific challenges during the transition period.
In our study, unsuccessful transitions had an impact on the disease outcome. Although all the transitioned patients had low disease activity as reflected by their DAS28 value, there was, nonetheless, a significant trend showing a relationship between disease activity and success in the transition. To the best of our knowledge, this has not been studied previously in the patients with JIA.
All the patients in our study attended their first adult rheumatological appointment, although, for a few, this was later than originally scheduled, and they needed extra communication from the adult clinic to ensure their attendance. The drop-out rate from follow-up and care is often used as a measure for evaluating transition, and low disease activity has turned out to be predictive of drop-out [ 23 ]. Past studies have presented a discouraging picture of the transition in JIA, showing up to half of transitions being classified as failures [ 24 , 25 ]. The establishment of our special rheumatology transition clinic at the adult site in 2011, and its protocols, were aimed at preventing loss from follow-up [ 20 ]. In our clinic, we transfer patients into the adult clinic at around the age of 16 years, but only patients with active disease and ongoing medication are transferred [ 19 ]. Based on the results of our previous study, we transfer around 40% of our adolescent patients [ 19 ]. Consequently, in the present study, the clinician’s decision to give the transition questionnaire only to patients with active disease and ongoing medication is in accordance with our transfer practices.
Due to changes in our hospital electronic patient record system, the use of this paper version of the questionnaire remained temporary. However, since doing this study, we have digitalized the questionnaire and separated the different age groups, that is, ages 12–13, 14–15 and over 16 years. The questionnaire can be found in Finnish on an open website funded by university hospitals in Finland ( https://www.terveyskyla.fi ). Since this digitalization of the questionnaire is relatively recent, we need further research to validate the questionnaire and expand its use to other clinics and to other chronic illnesses as well.
There are several ways to carry out the transition, and various transition practices are used worldwide [ 26 ]. EULAR/PReS has defined standards for transitional care and provides detailed recommendations about transitioning in JIA [ 27 ]. The transition process should start as early as possible, yet be respectful of individual developmental variations [ 27 ]. However, even between healthcare systems in similar societies, such as the Nordic countries, transition practices vary, as was shown in our previous study [ 28 ]. For example, in Finland, the common practice is to transfer adolescents with JIA to an adult clinic at the age of 16, whereas, in the other Nordic countries, the transition age is 18 years [ 28 ]. A German observational study of transition after kidney transplantation found that instead of focusing on the patient’s age during the transition, the focus should be on evaluating readiness, and the transition should be implemented more flexibly [ 29 ]. An ongoing prospective cohort study is exploring transition processes in Finland and Australia, thus introducing potentially interesting cultural differences that may influence transition outcomes [ 30 ].
This is a unique study about transition readiness, which evaluates the usefulness of the questionnaire and combines data from both paediatric and adult visits. We have developed a useful and practical tool, the PETRA questionnaire, to evaluate transition readiness among JIA patients. Since our study involves a single paediatric rheumatology centre, there might be challenges in the generalisation and wider use of the questionnaire. More studies and validation are needed to explore the usefulness of the questionnaire, to expand its use more widely and to incorporate other chronic illnesses. The lack of specific data concerning uveitis and the effects of possible mental issues on the transition process can be considered limitations. Further studies that include these elements are essential. | Conclusion
In this study, we developed a usable instrument for evaluating transition readiness in JIA. Based on our findings, the timing of the transition from paediatric care to the adult site should be flexible, allowing the young person to achieve better readiness, capability, and independence in the care of their chronic disease. The determination of what constitutes a successful transition can help to identify those adolescents who need more profound support and education in improving their self-management skills and thus, enhancing their transition process. | Background
With chronic diseases, the responsibility for care transfers to adult clinics at some point. Juvenile idiopathic arthritis (JIA) is the most common persistent rheumatic condition in children. A successful transition requires sufficient self-management skills to manage one's chronic condition and all the tasks involved. In this study, we evaluated transition readiness in Finnish patients with JIA. We aimed to find practical tools to support a successful transition and to study the possible consequences of an unsuccessful transition.
Methods
The usefulness of a specific questionnaire, which was administered to 83 JIA patients, was evaluated in this study. We also gathered information from their first adult clinic visit to assess the success of their transition and its relation to disease activity.
Results
In 55 (71%) patients, the transition was estimated to be successful. We were able to determine a cut-off score in the questionnaire for a successful transition: the best estimate for a successful transition is a score of 24 or more. At the first adult clinic visit, the effect of an unsuccessful transition on disease outcome was evident: if the transition was defined as successful, the DAS28 was lower.
Conclusion
We found the questionnaire to be a useful tool for evaluating transition readiness. Determining the success of the transition helped us identify those adolescents who needed more profound support to improve their self-management skills and thus enhance their transition process. An unsuccessful transition was shown to have a negative impact on disease outcomes.
Keywords
Data were presented as means with standard deviation (SD) and as counts with percentages. The Kaplan–Meier method was used to estimate the crude cumulative transition rate. Receiver operating characteristic (ROC) curves were used to determine an optimal cut-off value of the PETRA questionnaire for discerning successful transition. We defined the best cut-off value as the value with the highest accuracy, that is, the one that maximized Youden’s index (sensitivity + specificity − 1). In general, an AUC of 0.5 suggests no discrimination (that is, no ability to distinguish patients who transitioned successfully from those who did not), 0.7 to 0.8 is considered acceptable, 0.8 to 0.9 is considered excellent, and more than 0.9 is considered outstanding [ 21 ]. The area under the curve (AUC), sensitivity, specificity, and odds ratio (OR) were calculated; 95% confidence intervals were obtained by bias-corrected bootstrapping (5,000 replications). We also assessed floor and ceiling effects for items and the total score by calculating the proportion of patients who obtained the lowest or highest scores. The difference between the transfer groups in DAS28 values was evaluated using a t-test. The Stata 17.0 statistical package (StataCorp LP, College Station, TX, USA) was used for the analysis.
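The cut-off selection described above — scanning candidate scores and keeping the one that maximizes Youden's index — can be sketched as follows, using synthetic scores and outcomes rather than the study's patient data:

```python
# Hedged sketch of cut-off selection by Youden's index
# (sensitivity + specificity - 1). Data below are synthetic
# illustrations, not the study's patients.
import numpy as np

def youden_cutoff(scores, success):
    """Threshold t (predict success when score >= t) maximizing Youden's J."""
    scores = np.asarray(scores, dtype=float)
    success = np.asarray(success, dtype=bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):
        pred = scores >= t
        sens = np.mean(pred[success])      # true positive rate
        spec = np.mean(~pred[~success])    # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

scores  = [18, 20, 21, 22, 23, 24, 24, 24, 25, 26]
success = [ 0,  0,  0,  0,  0,  1,  1,  1,  1,  1]
t, j = youden_cutoff(scores, success)
print(t, j)  # → 24.0 1.0
```

In this toy data every successful transition scores 24 or higher, so the selected threshold is 24 with a perfect Youden's index; on real data the index is below 1 and bootstrap confidence intervals would accompany the estimate.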
We thank all the patients involved in the study.
Author contributions
KM, KR, HK, and KA designed the study. KM, KR, KA collected data. KM, KR, HK and KA performed data analysis and data interpretation and drafted the manuscript. All authors reviewed and provided input on the final draft of the manuscript. All authors approved the final manuscript.
Funding
The first author (KM) received part time Government Research Funding.
Open Access funding provided by University of Helsinki (including Helsinki University Central Hospital).
Data Availability
Data is available upon reasonable request from the study group.
Declarations
Ethics approval and consent to participate
This is a retrospective study based on information collected during routine clinical visits. Therefore, according to Finnish legislation, there was no need for permission from the ethical committee. The study was performed according to the guidelines of the Declaration of Helsinki.
Consent for publication
Not applicable.
Competing interests
The study did not receive any financial support or other benefits from commercial sources, and the authors have no financial interests that could create a potential conflict of interest or the appearance of one. The authors declare they have no competing interests.
List of abbreviations
JIA: juvenile idiopathic arthritis
DAS28: Disease Activity Score
sDMARD: synthetic disease-modifying antirheumatic drug
bDMARD: biologic disease-modifying antirheumatic drug
PETRA: pediatric transition readiness to adult care
HUS: Hospital District of Helsinki and Uusimaa
HAQ: Health Assessment Questionnaire
VAS: visual analogue scale
SD: standard deviation
ROC: receiver operating characteristic
AUC: area under the curve
OR: odds ratio
IQR: interquartile range
EULAR: European League Against Rheumatism
PReS: Paediatric Rheumatology European Society
ILAR: International League of Associations for Rheumatology | CC BY | no | 2024-01-15 23:35:09 | Pediatr Rheumatol Online J. 2023 Dec 21; 21:149 | oa_package/fe/aa/PMC10740281.tar.gz |
PMC10750394 | 38148946 | Introduction
Major depressive disorder (MDD) is a prevalent and debilitating psychiatric disorder that affects 4.7% of the global population and is the second leading cause of disability worldwide ( Ferrari et al., 2013 ). Neuroimaging studies have made significant efforts to explore the pathology underlying MDD. Abnormal functional connectivity (FC) within and between large-scale intrinsic brain networks ( Yan et al., 2019 ; Liu et al., 2021 ; Sun et al., 2022a , b ), such as the default mode network (DMN), executive control network (ECN), and salience network (SN), has been found in MDD using resting-state functional magnetic resonance imaging (rs-fMRI). This reflects that the synchronized spontaneous activity among anatomically distinct networks is potentially linked to rumination dysfunction ( Hamilton et al., 2015 ), cognitive impairment ( Clark et al., 2009 ), and emotional dysregulation ( Zhao et al., 2021 ) in patients with MDD. However, inconsistencies in the FC of several networks like DMN, including increases, decreases, both increases and decreases, and no significant changes, have been reported in prior studies of brain networks in MDD ( Yan et al., 2019 ). This may be related to low sensitivity and reliability, as well as limited statistical power due to small sample sizes ( Button et al., 2013 ; Chen et al., 2018 ), leading to the pathophysiology of MDD remaining unknown.
According to the ICD-10, MDD can be classified as first-episode or recurrent depression ( Hiller et al., 1994 ). The risk of relapse in MDD is directly proportional to the number of episodes ( de Jonge et al., 2018 ). Compared to first-episode MDD, recurrent MDD exhibits more severe depressive and somatic symptoms, greater impairments in verbal memory, executive function, and mental representation processing ( Roca et al., 2011 ; Nigatu et al., 2015 ), as well as higher medical costs ( Kamlet et al., 1995 ; Biesheuvel-Leliefeld et al., 2012 ). Therefore, distinguishing the neuropathological mechanisms of first-episode and recurrent MDD is important for developing new and effective treatment protocols. A prior large-sample study found FC reduction of DMN in recurrent but not in first-episode MDD, which was associated with duration of illness rather than medication usage, suggesting this alteration is related to symptom severity ( Yan et al., 2019 ). Another study revealed that compared with healthy controls, both first-episode and recurrent MDD showed reduced FC in the DMN and affective network, whereas the decrease in cognitive control network only occurred in first-episode MDD ( Sun et al., 2022a , b ). Compared with recurrent MDD, first-episode MDD showed hypoconnectivity in the DMN, dorsal attention network (DAN), and somatomotor network ( Liu et al., 2021 ). However, these FC findings did not consider the direction of information communication between networks.
Effective connectivity (EC) represents the direct or indirect causal effect of one brain region on another ( Deshpande et al., 2011 ; Deshpande and Xiaoping, 2012 ). In EC methods, Granger causality analysis (GCA) is a relatively data-driven analytical method that does not require the design of a complicated task. It is more convenient for clinical application than model-driven Structural equation modeling (SEM) and Dynamic causal modeling (DCM) ( Seminowicz et al., 2004 ; Schlösser et al., 2008 ). GCA analyzes the direction of information flow between brain areas using time series of information processing and can depict resting-state directional brain networks ( Jiao et al., 2014 ). A prior study has demonstrated that the EC measure may play a more important role than FC in exploring alterations in disease brains and afford better mechanistic interpretability ( Geng et al., 2018 ). Studies on MDD have reported abnormal EC in several brain regions such as the amygdala ( de Almeida et al., 2009 ), prefrontal cortex ( Hamilton et al., 2011 ), and insula ( Iwabuchi et al., 2014 ; Kandilarova et al., 2018 ), as well as in networks such as DMN, SN, and DAN ( Guo et al., 2020 ; Li et al., 2020 ; Wang et al., 2022 ). However, the similarities and differences of EC between first-episode and recurrent MDD in large-scale networks have been less studied using GCA.
In this study, we obtained resting-state fMRI data from 839 patients with MDD and 788 matched normal controls (NCs) from the Chinese REST-meta-MDD project. We used GCA to explore alterations in the EC within and between brain networks in first-episode and recurrent MDD. We also estimated the correlation between EC and clinical assessments. Our hypothesis was that the two MDD subgroups would show different changes in intra- and inter-network EC. | Methods
Participants
We utilized rs-fMRI data from the REST-meta-MDD consortium ( Chen et al., 2023 ), comprising 1,300 MDD patients and 1,128 NCs across 23 sites. Each participant underwent a T1-weighted structural scan and an rs-fMRI scan. The patient inclusion criteria, as reported in the study ( Yan et al., 2019 ), were as follows: (1) 18 years < age < 65 years; (2) education >5 years; (3) fulfillment of the Diagnostic and Statistical Manual of Mental Disorders-IV criteria for MDD; and (4) a total score of ≥8 on the 17-item Hamilton Depression Rating Scale (HAMD) at the time of scanning. The exclusion criteria included: (1) any contraindications for undergoing MRI; (2) poor spatial normalization, coverage, or excessive head motion; (3) incomplete information; and (4) sites with fewer than 10 patients in either group. Consequently, we obtained data from 839 MDD patients and 788 NCs across 17 sites. In terms of subgroups, we compared 227 first-episode drug-naïve (FEDN) patients with 388 matched NCs from five sites, 189 recurrent MDD patients with 423 matched NCs from six sites, and 117 FEDN patients with 72 recurrent MDD patients from two sites. The HAMD and Hamilton Anxiety Rating Scale (HAMA) were employed to assess depression and anxiety symptoms in each patient, respectively.
All data were identified and anonymized. Local Institutional Review Boards approved all contributing studies, and participants signed a written informed consent at each local institution.
fMRI preprocessing
All rs-fMRI scans were preprocessed at each site utilizing the identical DPARSF protocol as reported in Yan et al. (2019) . Specifically, the initial 10 volumes were discarded and slice-timing correction was performed. Subsequently, a rigid body transformation was used to realign the time series of images for each subject. After that, individual T1-weighted images were co-registered to the mean functional image using a 6 degrees-of-freedom linear transformation without re-sampling, and then segmented into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). Following this, transformations from individual native space to MNI space were computed using the Diffeomorphic Anatomical Registration Through Exponentiated Lie algebra (DARTEL) tool ( Ashburner, 2007 ) and applied to individual functional images. The Friston 24-parameter model, WM, and CSF signals were removed from normalized data through linear regression. Lastly, a linear trend was included as a regressor to account for drifts in the BOLD signal and temporal band-pass filtering (0.01–0.1 Hz) was applied to all time series.
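One piece of the pipeline above — regressing the Friston-24 motion parametersac, WM, and CSF signals out of each time series — is an ordinary least-squares projection. A minimal sketch with a single synthetic confound (illustrative only; not the DPARSF implementation):

```python
# Hedged sketch of the nuisance-regression step described above: removing
# confound signals (e.g., WM/CSF or motion regressors) from a BOLD time
# series via ordinary least squares. Synthetic data; not the DPARSF code.
import numpy as np

def regress_out(signal, nuisance):
    """Residual of `signal` after projecting out an intercept plus the
    columns of `nuisance` (shape: timepoints x regressors)."""
    X = np.column_stack([np.ones(len(signal)), nuisance])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return signal - X @ beta

rng = np.random.default_rng(1)
wm = rng.standard_normal(200)                      # stand-in WM signal
bold = 0.8 * wm + 0.3 * rng.standard_normal(200)   # contaminated voxel
clean = regress_out(bold, wm[:, None])
print(abs(np.corrcoef(clean, wm)[0, 1]) < 1e-8)    # confound removed
```

The least-squares residual is orthogonal to every regressor, so after the step the cleaned series carries no linear trace of the nuisance signal; in the full pipeline the design matrix would hold all 24 motion parameters plus WM and CSF columns.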
Effective connectivity analysis
We used the DOS-160 atlas ( Dosenbach et al., 2010 ) to segment the brain into 160 regions of interest (ROIs) belonging to six networks: the cingulo-opercular network (CON), fronto-parietal network (FPN), DMN, sensorimotor network (SMN), visual network (VN), and cerebellum network (CN) ( Figure 1 ). We extracted the averaged time series for each ROI and calculated the EC between every pair of ROIs using the GCA method. We then computed intra- and inter-network EC by averaging the connectivities between ROIs belonging to the same or different networks, respectively, as well as the averaged EC with each ROI or network as a seed.
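As a didactic illustration of the pairwise causality computation described above, a minimal lag-1 Granger causality estimate for two synthetic ROI time series might look like this (a simplification under stated assumptions; the consortium's actual GCA implementation is not specified here):

```python
# Didactic sketch of pairwise Granger causality (lag 1, ordinary least
# squares) between two ROI time series, in the spirit of the GCA step
# described above. A simplified illustration, not the authors' toolbox.
import numpy as np

def granger_lag1(x, y):
    """Granger causality x -> y: log ratio of residual variances of the
    restricted model (y_t ~ y_{t-1}) vs. the full model (y_t ~ y_{t-1} + x_{t-1})."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    yt, y1, x1 = y[1:], y[:-1], x[:-1]
    A_r = np.column_stack([np.ones_like(y1), y1])       # restricted design
    res_r = yt - A_r @ np.linalg.lstsq(A_r, yt, rcond=None)[0]
    A_f = np.column_stack([np.ones_like(y1), y1, x1])   # full design
    res_f = yt - A_f @ np.linalg.lstsq(A_f, yt, rcond=None)[0]
    return np.log(np.var(res_r) / np.var(res_f))

# Synthetic pair in which y is driven by the past of x
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * x[t - 1] + 0.1 * rng.standard_normal()

print(granger_lag1(x, y) > granger_lag1(y, x))  # the x -> y direction dominates
```

A positive, asymmetric measure of this kind is what allows EC to assign a direction to information flow, which plain FC cannot; real analyses would choose the model order by an information criterion and assess significance.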
Statistical analysis
We employed a linear mixed model (LMM) ( West et al., 2022 ) to compare differences in EC between MDD and NC, FEDN and NC, recurrent MDD and NC, and FEDN and recurrent MDD, respectively. The model was as follows: y ∼ 1 + Diagnosis + Age + Sex + Education + Motion + (1|Site) + (Diagnosis|Site), in which y represents the EC value. This yields t and p values for the fixed effect of Diagnosis ( Yan et al., 2019 ). To test relationships between EC and clinical assessments, we replaced the Diagnosis term in the LMM with the HAMD or HAMA score, respectively. Multiple comparisons were corrected using false discovery rate (FDR) correction ( p < 0.05).
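The FDR step above does not name a specific procedure; the Benjamini–Hochberg step-up method is the common default and is sketched here under that assumption, with illustrative p values:

```python
# Hedged sketch: Benjamini-Hochberg FDR correction over a set of
# connectivity p values. The choice of BH and the p values themselves
# are illustrative assumptions, not the consortium's code or results.
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Boolean mask of tests declared significant at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Largest rank k such that p_(k) <= (k/m) * q; all smaller ranks pass too.
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    sig = np.zeros(m, dtype=bool)
    if below.any():
        kmax = np.max(np.nonzero(below)[0])
        sig[order[: kmax + 1]] = True
    return sig

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(fdr_bh(pvals).sum())  # → 2 connections survive correction
```

Unlike a Bonferroni threshold of q/m, the step-up rule adapts the per-test threshold to the rank of each p value, controlling the expected proportion of false discoveries across the many intra- and inter-network comparisons.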
Characteristics of participants
As shown in Table 1, the two MDD subgroups did not differ significantly from NCs in age or gender ( p > 0.05) but had lower education than NCs ( p < 0.001). Recurrent MDD showed a longer duration of illness than FEDN ( p < 0.001). FEDN and recurrent MDD showed no significant differences in age, gender, or education ( p > 0.05). Total MDD (the mixture of FEDN and recurrent MDD) differed significantly from NCs in education ( p < 0.001) and gender ( p = 0.005) but not age ( p > 0.05).
Between-group differences in EC of large-scale brain networks
As shown in Figures 2 , 3 , total MDD showed decreased afferent EC to the CN, increased efferent EC from the FPN, and increased EC from SMN to FPN compared with NCs. When MDD was divided into two subgroups, FEDN showed increased EC from SMN to FPN compared to NCs, and decreased EC from SMN to VN relative to recurrent MDD. Recurrent MDD showed stronger efferent EC in the ventral lateral prefrontal cortex (vlPFC) with all other regions in the whole brain relative to both NCs and FEDN, and decreased afferent to the CN and efferent EC from the SMN compared with NCs. The FEDN showed no significant difference in seed-based network EC compared to NCs, and the recurrent MDD showed no significant difference in inter-network EC compared to NCs.
Correlation
The EC from CON to SMN showed a significant negative correlation ( p = 0.004, R = −0.20) with the HAMD score in the FEDN group ( Figure 4 ). No significant correlations were observed in the total MDD and recurrent MDD groups. There were no significant correlations between EC and the HAMA score in any group.
This study used GCA to explore alterations in EC within and between resting-state networks in FEDN and recurrent MDD patients in a large-sample Chinese population. We found that: (1) recurrent and total MDD showed altered EC in the FPN, SMN, and CN compared to NCs, while FEDN and total MDD showed altered inter-network EC from SMN to FPN compared to NCs; (2) two MDD subgroups showed significant differences in intra-network EC of the vlPFC within the FPN and in inter-network EC from SMN to VN; (3) the EC from CON to SMN showed a significant negative correlation with the HAMD score in FEDN but not in recurrent MDD group. These findings suggest that the EC among large-scale brain networks at rest was disrupted in patients with MDD, and that recurrent MDD exhibited different effective connections from FEDN.
Previous studies demonstrated that repetitive transcranial magnetic stimulation to the vlPFC reduced individual depersonalization symptoms ( American Psychiatric Association, 2013 ; Jay et al., 2016 ). Increased activity in this region is linked with increased fronto-insula/limbic inhibitory regulation ( Lemche et al., 2007 ) and may represent an increased effort to regulate emotions or be indicative of deficits in this area ( Langenecker et al., 2007 ). Compared with healthy controls and MDD patients, bipolar disorder (BD) patients showed increased ventral prefrontal cortical responses to both positive and negative emotional expressions ( Lawrence et al., 2004 ). A prior magnetic resonance spectroscopy study found that recurrent MDD showed more metabolite abnormalities in the ventral frontal cortex compared with both first-episode MDD and controls ( Portella et al., 2011 ). A recent functional near-infrared spectroscopy study demonstrated different neurofunctional activity in frontal regions in FEDN and recurrent MDD, linking the level of activation complexity in these regions to the severity of patients’ cognitive impairment ( Yang et al., 2023 ). Another recent fMRI study reported that recurrent MDD had higher spontaneous brain activity in the prefrontal cortex compared to first-episode depression, which showed a positive correlation with depressive symptom severity ( Sun et al., 2022a , b ). Effective connectivity analysis revealed mutually propagating activation in the ventral prefrontal cortex in people with MDD, which predicted higher levels of depressive rumination ( Hamilton et al., 2011 ). Consistently, our study found that recurrent MDD had increased EC in the vlPFC compared with both FEDN and NCs, suggesting more severe depressive symptoms in recurrent patients, possibly associated with depersonalization, emotional regulation, and rumination.
Recent studies have demonstrated that the cerebellum plays a significant role in motor control, cognition, and emotion ( Balasubramanian et al., 2021 ; Su et al., 2021 ). For example, Liu et al. (2012) found disrupted FC of the CN in adults with major depression, which could be associated with emotional disturbances and cognitive deficits. Liu et al. (2022) reported altered EC of the CN in patients with MDD, which was correlated with deficits in spatial–visual attention and psychomotor disorders. The FPN, also referred to as the executive network, plays a pivotal role in control function, execution, and emotion processing. It is strongly associated with cognitive problems in depression, especially those concerning executive functions. Dysfunctions within the FPN are likely connected to ineffective transmission of information between parietal and prefrontal regions ( Brzezicka, 2013 ). Studies also reported alterations in FC strengths in the frontal and sensorimotor networks ( Pang et al., 2020 ) and disrupted interhemispheric coordination in SMN in MDD patients ( Zhang et al., 2023 ). Moreover, individuals with long-duration MDD showed increased FC in the FPN compared to those with short-duration MDD ( Sheng et al., 2022 ). Similarly, the present study found altered EC of the FPN, CN, and SMN in total MDD and recurrent MDD but not in FEDN, which could be associated with functional impairments of cognitive processing, perception and information integration ( Lu et al., 2020 ), and treatment response-related changes in depression ( Dichter et al., 2015 ). These findings may serve as a potentially effective biomarker for recurrent MDD.
Many studies have found significantly altered connections in low-order networks such as the SMN and VN in MDD patients ( Wei et al., 2015 ; Sambataro et al., 2017 ). The sensorimotor cortex is a brain region that has attracted much attention in depression research ( Ray et al., 2021 ). Several sensorimotor interventions, including light, music, and physical exercise are known to modulate mood and depressive symptoms ( Canbeyli, 2013 ). Depression gives rise to sensorimotor alterations such as psychomotor retardation or agitation and feelings of fatigue, which are part of the diagnostic criteria for depression ( Guha, 2014 ). Previous studies have found alterations of FC and cerebral blood flow in the SMN related to psychomotor retardation in patients with depression ( Yin et al., 2018 ; Yu et al., 2019 ), while task-based fMRI studies showed differential reactions of the visual cortex in depression ( Rosa et al., 2015 ; Le et al., 2017 ). Lu et al. (2020) found reduced between-network FC in auditory and visual networks associated with depression. Kang et al. (2018) demonstrated abnormal primary somatosensory area-thalamic FC in MDD. Moreover, abnormal ECs among the FPN, VN, and SMN networks have been reported to be related to visual attention and cognitive behavior deficits in MDD patients ( Kang et al., 2018 ). Therefore, the present study observed increased EC from SMN to FPN in both total MDD and FEDN compared to NCs, which may be compensation for sensory impairments, psychomotor retardation, and cognitive dysfunction of patients. In addition, a recent study uncovered that the ECs in sensorimotor cortices may serve as a promising and quantifiable candidate marker of depression severity and treatment response ( Ray et al., 2021 ). Another study found that changes in information flow direction from SMN before and after electroconvulsive therapy were significantly correlated with improvement in depressive symptoms in MDD patients ( Kyuragi et al., 2023 ). 
A small-sample study found that patients with recurrent MDD showed remarkably different effective connections compared to patients with first-episode MDD, especially involving the attention network ( Wang et al., 2022 ). Thus, the increased EC from the SMN to the VN in recurrent MDD relative to FEDN in the present study may be associated with the depression severity and treatment of patients. Furthermore, the EC from CON to SMN, which negatively correlated with the HAMD score, may serve as a biomarker to predict the severity of MDD.
Limitations
The present study has several limitations. First, the correlation analysis relied solely on HAMD scores. Many rating scales exist for assessing depression severity, each with its own advantages and limitations, so the present neuroimaging findings could be further validated with a combination of observer rating scales and objective behavioral measures of depression ( Lahnakoski et al., 2020 ). Second, the medication history of the recurrent MDD patients was unavailable, and the present findings therefore need replication. Third, all MDD patients in the present study were drawn from Chinese populations, so the findings may not generalize to other regions or populations. Fourth, the LMM approach used here has its own potential limitations; its performance relative to alternative methods and its applicability to this specific study were not systematically evaluated. Finally, as a cross-sectional study, changes in connections with disease progression cannot be thoroughly captured by the limited nodes. Further efforts, such as intervention studies with comparisons before and after medication, are required to draw valid conclusions on the impact of EC. | Conclusion
The present study used the GCA method to investigate differences in EC of large-scale brain networks in FEDN and recurrent MDD patients. We found that recurrent MDD showed altered EC in the FPN, SMN, and CN, whereas FEDN showed altered inter-network EC from SMN to FPN compared with NCs. Meanwhile, the ECs within the FPN and from SMN to VN displayed significant differences between the two MDD subgroups. Moreover, the EC from CON to SMN showed a significant negative correlation with HAMD scores in the FEDN group but not in the recurrent MDD group. These findings suggest that first-episode and recurrent MDD may have different effective connectivity patterns among large-scale brain networks, which may serve as potential biomarkers for diagnosing MDD. | Edited by: Zhiyong Zhao, Zhejiang University, China
Reviewed by: Chao Li, The First Affiliated Hospital of China Medical University, China; Lanxin Ji, New York University, United States; Zhengyuan Yang, University of Macau, China
† These authors have contributed equally to this work
Introduction
Previous studies have shown disrupted effective connectivity in the large-scale brain networks of individuals with major depressive disorder (MDD). However, it is unclear whether these changes differ between first-episode drug-naive MDD (FEDN-MDD) and recurrent MDD (R-MDD).
Methods
This study utilized resting-state fMRI data from 17 sites in the Chinese REST-meta-MDD project, consisting of 839 patients with MDD and 788 normal controls (NCs). All data were preprocessed using a standardized protocol. Then, we performed Granger causality analysis to calculate the effective connectivity (EC) within and between brain networks for each participant, and compared the differences between the groups.
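The Granger causality step can be illustrated with a minimal bivariate test: fit an autoregressive model of one network's time series with and without lagged values of the other series, and compare residual sums of squares via an F statistic. This is only a sketch on toy data with an assumed lag order; the study's actual pipeline operates on preprocessed large-scale network time series, and `granger_f` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def granger_f(x, y, p=2):
    """F statistic testing whether y Granger-causes x: compare an AR(p)
    model of x (restricted) with one that adds p lags of y (full)."""
    n = len(x)
    rows = n - p
    lags = lambda s: [s[p - j - 1:n - j - 1] for j in range(p)]  # lag-1..lag-p columns
    Xr = np.column_stack([np.ones(rows)] + lags(x))              # x lags only
    Xf = np.column_stack([np.ones(rows)] + lags(x) + lags(y))    # plus y lags
    target = x[p:]
    rss = lambda X: np.sum((target - X @ np.linalg.lstsq(X, target, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(Xr), rss(Xf)
    df2 = rows - 2 * p - 1
    return ((rss_r - rss_f) / p) / (rss_f / df2)

# Toy data: y drives x with a one-step delay, so the y->x statistic
# should dwarf the x->y one.
rng = np.random.default_rng(0)
y = rng.normal(size=500)
x = 0.8 * np.roll(y, 1) + rng.normal(scale=0.5, size=500)
```

In practice the resulting statistics (or signed path coefficients) are computed for every ordered pair of network time series and then compared across groups.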
Results
Our findings revealed that R-MDD exhibited increased EC in the fronto-parietal network (FPN) and decreased EC in the cerebellum network, while FEDN-MDD demonstrated increased EC from the sensorimotor network (SMN) to the FPN compared with the NCs. Importantly, the two MDD subgroups displayed significant differences in EC within the FPN and between the SMN and visual network. Moreover, the EC from the cingulo-opercular network to the SMN showed a significant negative correlation with the Hamilton Rating Scale for Depression (HAMD) score in the FEDN-MDD group.
Conclusion
These findings suggest that first-episode and recurrent MDD have distinct effects on the effective connectivity in large-scale brain networks, which could be potential neural mechanisms underlying their different clinical manifestations. | Data availability statement
Publicly available datasets were analyzed in this study. This data can be found here: http://rfmri.org/REST-meta-MDD .
Author contributions
YZ: Writing – original draft, Methodology, Software, Writing – review & editing. TH: Methodology, Writing – original draft. RL: Writing – original draft, Writing – review & editing. QY: Writing – original draft, Writing – review & editing. CZ: Writing – review & editing, Methodology, Software. MY: Writing – review & editing, Methodology, Software. BL: Writing – review & editing, Formal analysis. XL: Writing – review & editing. | Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | CC BY | no | 2024-01-15 23:35:07 | Front Neurosci. 2023 Dec 11; 17:1308551 | oa_package/5b/7d/PMC10750394.tar.gz |
PMC10755656 | 38096002 | Introduction
Adherence to Oral Anticancer Treatments
Metastatic breast cancer (MBC) is an incurable disease, wherein the available medications are primarily focused on deferring disease progression and mitigating symptoms, thereby prolonging survival as well as preserving the quality of life (QoL) and psychological well-being [ 1 , 2 ]. The clinical advancements achieved in anticancer treatments have increased the survival rates of patients with MBC; the 5-year survival rate of patients with MBC is around 38% [ 2 ]. Nevertheless, several studies [ 3 - 6 ] have shown that adherence to anticancer treatments is a critical issue in the disease trajectory of patients with breast cancer, especially regarding oral anticancer treatments (OATs), which are intensely demanding because patients are responsible for taking their medications according to the medical prescriptions, thereby increasing the risk of the therapy not being taken appropriately [ 7 ].
From a theoretical perspective and according to the World Health Organization’s recommendations [ 8 ], medication adherence might be explained by a set of mutual and interconnected determinants incorporating (1) sociodemographic (eg, age, gender, socioeconomic status), psychocognitive, and social variables (eg, psychological well-being, social support); (2) disease and treatment-related characteristics (eg, cancer stage, prognosis, dosage, side effects); (3) attitudes, beliefs, and values; (4) health literacy and knowledge; and finally (5) health care system–related factors [ 7 , 9 , 10 ]. Accumulating evidence has highlighted that patients with advanced cancer might report significant levels of nonadherence because, in some cancers such as MBC, patients have to change the type or dosage of treatment frequently and experience a high level of fear of cancer spread [ 3 ]. For example, Yerrapragada and colleagues [ 6 ] reported nonadherence to tamoxifen in patients with MBC ranging between 30% and 85% and declining further over time. In addition, patients with MBC reported several barriers to the daily management of OAT, such as emotional (eg, worry, depression) and physical distress related to the side effects (eg, fatigue, weakness, sleep disturbance, emotional burden, pain) and lack of knowledge about their disease. Further, patients might experience a lack of control and a lack of perceived benefits during the disease pathway, and they may experience difficulty in managing therapy [ 3 , 6 ]. Other studies have shown that patients with MBC experience modest QoL due to the related treatment side effects, financial burden affecting therapy discontinuation, and a significant level of nonadherence [ 4 , 7 ].
Marshall and colleagues [ 4 ] observed that patients with MBC who frequently report treatment concerns rather than treatment benefits are less likely to adhere to prescriptions, and that patients who experience more numerous and severe side effects tend to have more medication worries, which could negatively impact adherence.
Risk-Predictive Models and Decision Support Systems
Given the impact of nonadherence on clinical outcomes and the associated economic burden on the health care system [ 11 , 12 ], finding effective ways to increase treatment adherence is particularly relevant. Patients who adopt nonadherent behaviors need support in managing oral therapies and in overcoming individual and systemic barriers and roadblocks [ 5 ]. Nevertheless, the dynamics influencing adherence to OATs are understudied in the cancer field [ 13 ], and a comprehensive model of the risk determinants of nonadherence is not currently available, particularly for targeted therapies and novel generations of hormonal therapy [ 14 ]. Besides, there is no shared definition of medication adherence, and the available assessment tools vary in terms of accuracy and reliability [ 14 - 17 ], even in the explicit case of the same treatment protocol and diagnosis [ 13 , 18 ]. Because of the direct costs of nonadherence (eg, survival rate, health-related QoL for patients with MBC) and the indirect costs for the health care system (eg, economic burden), it is mandatory to identify potential risk factors of nonadherence to OATs and to define personalized interventions supporting adherence during the clinical pathway in patients with MBC.
Consistently, defining, measuring, and developing a comprehensive model of medication adherence based on real-world data predictive models is a crucial clinical, psychological, social, and economic challenge. Machine learning has become an integral part of the health care industry—from biomedical research to the delivery of health care services. Compared to traditional statistical methods, machine learning provides numerous advantages such as increased flexibility, prediction accuracy, possibility of automation, and processing of big data. Prediction models for adherence have already been developed and tested in various scenarios [ 19 - 22 ]. If adequately reported, these models can help guide treatment decision-making, improve patient outcomes, and streamline perioperative health care management. Considering the complexity of medication nonadherence in patients with MBC, it is critical to identify patients at risk of nonadherence and carry out timely, precise, and tailored interventions to improve their adherence. Through machine learning models, it is possible to provide personalized prediction on medication adherence for a given patient, supporting adherence and performing a specific intervention [ 22 , 23 ].
A growing body of studies have underlined that eHealth technologies (eg, mobile apps, web-based solutions, wearables) might be helpful tools to foster patient management and engagement in clinical decisions during the cancer pathway [ 24 ]. Different web-based solutions based on educational and behavioral interventions have been developed for patients with breast cancer to foster medication adherence [ 25 , 26 ]. For example, the Multinational Association for Supportive Care in Cancer has developed an educational and teaching tool for patients with cancer receiving OATs that is composed of different educational sections aimed at assessing general patient knowledge about their treatment protocol and drug information (eg, side effects), skills in the management of therapy, possible strategies to manage nonadherence occurrences, and a specific questionnaire section to evaluate patient comprehension [ 26 ]. Moreover, Omaki and colleagues [ 27 ] have developed a patient decision aid named “My Healthy Choices” to foster adherence to pain treatments, assessing the environmental and personal risks and setting patient treatment priorities.
Nevertheless, no study has been conducted on patients with MBC to foster medication adherence to OATs through the clinical care pathway based on designing and developing a decision support system (DSS), integrating risk predictive models and educational and training tools. The information embedded in a DSS solution designed and developed according to the needs of patients with MBC might enable users to be better informed, develop more accurate expectations of the benefits and harms, and increase participation in the decision-making processes and medication adherence [ 28 ]. Evidence shows that, when implemented on web or mobile apps, DSS may support patients and physicians by improving adherence to medical treatments [ 27 , 29 , 30 ].
The Pfizer Project (65080791)
Drawing from the theoretical framework described above, we present and explain the study protocol of a prospective, randomized controlled study that is nested in a large-scale international project named “Enhancing Therapy Adherence Among Metastatic Breast Cancer Patients” (Pfizer 65080791), which aims to develop a predictive model of nonadherence and an associated DSS and guidelines to foster patients’ engagement and therapy adherence among patients with MBC concerning oral chemotherapy, endocrine therapy, supportive care, and the treatment of comorbidities. Consistently, the Pfizer Project (65080791) has been organized into 2 different studies to achieve the key goal. A retrospective study has been designed and a model has been developed to predict adherence to OATs, using existing physiological, clinical, and QoL data available in the European Institute of Oncology (IEO; Milan, Italy). In more detail, multimodal retrospective data have been retrieved from patient electronic health records by using natural language processing in a sample of 2750 patients with MBC (from 2010 to 2020). The data included in the analysis were sociodemographic variables, diagnosis, biochemical and medical tests, procedures and medical history, treatment programs, treatment side effects, comorbidities, and familiarities. Concerning adherence to the treatment protocols, data included the following dimensions: initiation of the treatment, interruption of treatment, and skipped treatment doses. Furthermore, a prospective study is designed to assess the effectiveness of a DSS web-based solution and to enrich the predictive power of the machine learning model to forecast adherence behavior in patients with MBC.
The tuning of the model permits adding additional predictors (personality traits, self-efficacy for coping with cancer, sense of coherence, pain, anxiety, depression, risk perception, and QoL) known to influence medication adherence behavior and that are not available retrospectively [ 10 ]. These data are used to improve the predictive power of the machine learning model and its capacity to profile patients’ adherence behaviors and to provide an individual risk value of nonadherence. | Methods
Primary End Point Analysis
The primary end point was to assess the effectiveness of the DSS web-based solution and the machine learning web application in promoting adherence to OATs in a sample of 100 patients with MBC at 3 months. Adherence is evaluated as the number of pills taken divided by the number of pills prescribed. Adherence is also assessed using behavioral measures: the 8-item Morisky Medication Adherence Scale (MMAS-8) [ 31 ] and the Adherence Attitude Inventory (AAI) [ 32 ].
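The primary adherence measure reduces to a simple ratio. The sketch below (hypothetical helpers, in Python for illustration) also shows a dichotomization at 80%, which is a widely used convention assumed here and not a cutoff stated in the protocol.

```python
def adherence_rate(pills_taken, pills_prescribed):
    """Primary end point: number of pills taken divided by the number
    of pills prescribed over the observation window."""
    if pills_prescribed <= 0:
        raise ValueError("pills_prescribed must be positive")
    return pills_taken / pills_prescribed

def is_adherent(pills_taken, pills_prescribed, threshold=0.80):
    # The 80% cutoff is a common convention, assumed for illustration;
    # the protocol itself analyzes the continuous ratio.
    return adherence_rate(pills_taken, pills_prescribed) >= threshold

rate = adherence_rate(25, 28)   # eg, 25 of 28 weekly pills taken
```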
Secondary End Point Analysis
The secondary end points were to identify clinical (comorbidities, presence of pain, tumor type, type of treatment), psychological (personality traits, anxiety, depression, self-efficacy for coping with cancer, sense of coherence, and risk perception), and QoL variables predicting patients’ adherence to OATs. These variables are used as predictors for evaluating nonadherence to OATs among patients with MBC and for enriching the preliminary version of a machine learning model developed in the retrospective study. Our initial machine learning models that are evaluated as an intervention in this study were built based on variables extracted from clinical notes by using natural language processing. Once the data collection for this study is complete, we will also use the data collected to improve our initial machine learning models. Data for the secondary end points are collected using the European Organization for Research and Treatment of Cancer QoL Questionnaire Core 30 (EORTC-QLQ-C30) [ 33 , 34 ], European Organization for Research and Treatment of Cancer QoL 23-item Breast Cancer-specific Questionnaire (EORTC-QLQ-BR23) [ 35 ], and the Brief Pain Inventory (BPI) [ 36 ]. To evaluate psychological variables, we used the State-Trait Anxiety Inventory (STAI-Y) [ 37 , 38 ], Beck Depression Inventory-II (BDI-II) [ 39 , 40 ], Big Five Inventory (BFI) [ 41 ], Brief Italian version of Cancer Behavior Inventory (CBI-B/I) [ 42 , 43 ], 13-item Sense of Coherence scale (SOC-13) [ 44 ], and risk perception (using 2 visual analogue scales) [ 45 , 46 ].
Study Design
Treatment Adherence DSS
The first version of the web-based DSS, namely, TREAT (treatment adherence support), was designed and developed in the first year of the Pfizer Project (65080791) by using a patient-driven approach. The development phase was systematized in 5 steps. First, a systematic review of the literature was conducted to explore the main issues related to the interventions fostering adherence in patients with breast cancer [ 47 ]. Second, 4 focus group studies with 19 patients with MBC (mean age [years] 55.95, SD 6.87; range 46-70) with different metastasis localizations were implemented in order to explore patients’ unmet needs related to the MBC disease and knowledge about adherence, barriers, roadblocks, and resources related to OATs and to examine the role of technologies and decision support as aids fostering adherence behaviors during the care pathway [ 45 ]. The first and second steps informed the third step, in which a preliminary set of mock-ups of TREAT was developed using Wikimedia technology. TREAT was developed as a web-based application accessible by patients through a web link, without personal registration, and in Italian, in line with a patient-centered approach. The information contained in TREAT was constructed using the international guidelines for patients with MBC (eg, European School of Oncology-European Society for Medical Oncology 2nd international consensus guidelines for advanced breast cancer) [ 1 , 46 ]. Further, the information was organized using different setups (written text, figures, flowcharts, graphs, and tables). In the fourth step, this preliminary version was revised by an internal and multidisciplinary review group (oncologists, psychologists, technicians, and patients). Finally, the revised version of TREAT was translated and published.
Currently, TREAT is organized into 4 key sections. Section A on metastatic breast cancer provides information on MBC disease (eg, definition, clinical management), physical consequences (eg, pain, weight loss, lack of energy), psychological consequences (eg, anxiety, depression, fatigue), anticancer treatments and the associated side effects, and benefits that can be experienced during the care pathway. Section B on adherence to cancer therapies discusses the meaning of nonadherence and its consequences, incidence in the population with cancer, and determinants of medication adherence according to the World Health Organization’s approach. Section C on promoting adherence provides information about the resources (eg, personal beliefs, social support, trust in the health care providers), barriers (eg, distress, inadequate knowledge) of medication adherence, and the different available interventions (eg, educational, affective, behavioral) to promote patients’ adherence. Further, TREAT delivers specific self-managed suggestions to mitigate potential risk factors associated with nonadherence and an example of a specific training based on a goal-setting approach. In Section D, that is, My Adherence Diary, patients are invited to write a free-text diary reporting doubts, concerns, thoughts, and behaviors related to their disease and therapies, sharing this information with their oncologists in the following clinical consultation ( Figures 1 - 2 ).
We also developed a machine learning application, which will be tested with the DSS. This application focuses on models to predict adherence risk factors and uses data extracted from the electronic health records of the IEO. Specifically, the machine learning models focused on 2 outcomes: short-term and long-term side effects as well as physical status and comorbid conditions. We evaluated the models’ predictive performance by utilizing metrics such as area under the curve, precision, recall, sensitivity, specificity, κ, and positive and negative predictive values. Based on these criteria, we identified the top performing models, which were integrated into a web application built using the Shiny open source framework for the R statistical language (R Core Team) [ 48 ]. We designed this application for shared decision-making sessions between patients and oncologists. It also uses the Shapley interpretable machine learning algorithm, which offers insights into the specific risk factors that played a role in predicting outcomes for individual patients.
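Most of the ranking metrics listed above derive directly from a 2x2 confusion matrix. A minimal sketch follows (the AUC, which requires ranked scores rather than counts, is omitted; `binary_metrics` is a hypothetical helper, and the evaluation here is done in R/Shiny rather than Python):

```python
def binary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and Cohen's kappa from the
    counts of a 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    observed = (tp + tn) / n                         # observed agreement
    expected = ((tp + fp) * (tp + fn)                # chance agreement
                + (fn + tn) * (fp + tn)) / n ** 2
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "kappa": (observed - expected) / (1 - expected),
    }

metrics = binary_metrics(tp=40, fp=10, fn=20, tn=30)
```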
Participants
A sample of 100 patients with an MBC diagnosis has been enrolled consecutively at the IEO Division of Medical Senology since May 2023. Specifically, the observed outcomes of 50 patients with MBC exposed to the DSS (experimental group) and 50 patients with MBC not exposed to the intervention (control group) are evaluated.
Inclusion and Exclusion Criteria
The inclusion criteria were female patients with MBC, 18 years of age and older, with a prescription of oral treatment (eg, oral chemotherapy, endocrine therapy, cyclin-dependent kinase 4/6 inhibitors), having internet access and a personal smartphone or tablet, and ability to read and sign informed consent. The exclusion criteria were the presence of any primary psychiatric or neurological conditions.
Procedure
Randomization
Patients who signed the informed consent are given a unique identifier and assigned to either the control or intervention arm in a 1:1 ratio. First, the system asks to confirm all the inclusion and exclusion criteria. Then, an independent researcher generates a random sequence using the statistical language R (R Core Team 2020). Our randomization schedule involves an undisclosed blocking size that the data science team calculates without stratification. A patient is considered randomized when the randomization system assigns the patient an identification number according to a pre-established randomization list. We have created 2 different groups: the experimental group (n=50), in which each participant receives the link for the DSS and is trained to use the DSS for 3 months; and the control group (n=50), in which each patient receives standard care and suggestions bolstering adherence. Among all participants, the personal nonadherence risk is calculated through the preliminary version of the machine learning model to predict patients’ nonadherence behavior.
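A 1:1 permuted-block allocation of the kind described can be sketched as follows, in Python for illustration (the protocol itself generates the sequence in R). The block size and seed here are arbitrary, since the trial's actual blocking size is calculated by the data science team and kept undisclosed.

```python
import random

def blocked_allocation(n_patients, block_size=4, seed=2023):
    """1:1 allocation in shuffled permuted blocks: every block contains
    equal numbers of each arm, so the groups stay balanced throughout
    enrollment."""
    assert block_size % 2 == 0, "a 1:1 ratio needs an even block size"
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_patients:
        block = (["experimental"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_patients]

arms = blocked_allocation(100)   # 50 experimental, 50 control
```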
Recruitment and Follow-Up
Patients admitted to the Division of Medical Senology of the IEO are enrolled by the oncologist during the formal clinical consultation. Patients are enrolled even if they already had oral treatments in the past and are switching to a new one. The patients who show interest in this research have a phone call with a trained psychologist to receive further information about the Pfizer study. If patients decide to participate in this study, the psychologist plans a consultation to obtain informed consent and perform a preliminary assessment. Patients are informed that TREAT will not replace the clinical consultation; however, it should help them manage oral treatment and improve adherence through education using evidence-based information. Patients may decide to discontinue their participation in the trial without any penalties. Patients who refuse to participate in this study are asked to complete a short refusal survey regarding the main reasons for not participating and their demographic information in order to assess the potential differences among the participants. Furthermore, patients can contact the relevant division with any concerns. Data are collected by the REDCap (research electronic data capture) platform and stored centrally by the IEO.
Measures
Assessment of Nonadherence to OATs
This study uses the operational definition of adherence provided by the International Society for Pharmacoeconomics and Outcomes Research Medication Compliance and Persistence Work Group, which defines adherence as “the extent to which a patient acts in accordance with the prescribed interval and dose of a dosing regimen” [ 49 ]. Coherently, the nonadherence to OATs is evaluated using a prospective method [ 15 ] by weekly medication diaries and 2 self-reported measures.
Adherence Medication Diary
A weekly paper diary is given to patients to monitor their medication intake. The diary assesses whether medications are taken as prescribed and the possible reasons for not taking pills (eg, side effects, forgetfulness, no pill refill). The data collected are the number of pills not taken per week and the number of pills scheduled per week according to the patient’s medical prescription protocol, as well as planned interruptions. Further, the diary evaluates and monitors the side effects associated with the therapy intake, their intensity (from 0 to 10), and the patients’ emotions, thoughts, behaviors, and consequences.
MMAS-8 Self-Report Questionnaire (© 2006 Donald E Morisky)
The MMAS-8 scale [ 31 , 50 , 51 ] is an 8-item self-report questionnaire evaluating treatment adherence (forgetfulness, medication-taking behavior, adverse effects, and problems) (Cronbach α=.83). The first 7 items have dichotomous responses (0=Yes, 1=No), and the last includes a 5-point Likert scale response [ 52 ].
AAI Self-Report Questionnaire
The AAI is a 28-item self-report questionnaire on a 5-point Likert scale response (from “does not fit” to “perfect fit”) evaluating treatment adherence (Cronbach α=.80) [ 32 ]. The AAI is organized into 4 subscales: cognitive functioning (evaluating the ability to remember issues and tasks related to adherence in the short and long term), patient-provider communication (evaluating thoughts, attitudes, feelings, and ideas related to adherence between patient and medical provider), self-efficacy (evaluating the belief in one’s ability to adhere to medication and a history of similar success), and commitment to adherence (evaluating the determination to overcome obstacles to achieve adherence).
Demographic, Clinical, Emotional, and QoL Assessments
Considering that adherence to OATs is explained by a set of mutual and interconnected determinants, a comprehensive pool of self-reported measures is administered to identify the psychological predictors of nonadherence in patients with MBC.
Patient Demographic and Clinical Variables
Age, gender, education, marital status, cancer diagnosis and staging, oncological treatments, BMI, alcohol and smoking habits, and comorbid medical disorders are collected through electronic medical records.
STAI-Y Self-Report Questionnaire
The STAI-Y is a 40-item self-report questionnaire on a 4-point Likert scale (Cronbach α=.89). The STAI-Y evaluates both trait anxiety (20 items) and state anxiety (20 items) [ 37 , 38 ].
BDI-II Self-Report Questionnaire
The BDI-II is a 21-item self-report questionnaire on a 4-point Likert scale response evaluating depression (Cronbach α=.89). The BDI can be administered to adults and adolescents aged 13 years and older [ 39 , 40 ].
CBI-B/I Self-Report Questionnaire
The CBI-B/I is a 12-item self-report questionnaire on a 7-point Likert scale response (from not all confident to totally confident) evaluating self-efficacy for coping in patients with cancer (Cronbach α=.84). The CBI-B/I is composed of 4 subscales: coping and stress management, maintaining independence, managing affect, and participating in medical care [ 42 , 43 ].
SOC-13 Self-Report Questionnaire
The SOC-13 is a 13-item self-report questionnaire on a 7-point Likert scale response (Cronbach α=.76). The SOC-13 is composed of 3 subscales: comprehensibility, manageability, and meaningfulness [ 44 ].
BPI Self-Report Questionnaire
The BPI is a 9-item self-report questionnaire evaluating pain intensity during the past 24 hours (Cronbach α=.91) [ 36 ].
EORTC-QLQ-C30 Self-Report Questionnaire
The EORTC-QLQ-C30 is a self-report questionnaire composed of 28 items on a 4-point Likert-type scale (ranging from not at all to very much) and 2 items, that is, general global health status and QoL, on a 7-point Likert-type scale (ranging from very poor to excellent) [ 33 , 34 ]. The EORTC-QLQ-C30 provides information on 3 areas: functional (physical, role, cognitive, emotional, and social), symptoms (appetite loss, fatigue, pain, nausea, constipation-diarrhea, dyspnea, and insomnia), and global health status or QoL (Cronbach α=.85) [ 32 , 33 ]. Further, the EORTC-QLQ-BR23 for patients with breast cancer is used (Cronbach α=.87) [ 34 , 35 ].
BFI Self-Report Questionnaire
The BFI is a 44-item self-report questionnaire on a 5-point scale (from disagree strongly to strongly agree) that assesses 5 dimensions of personality: openness to experience (Cronbach α=.78), conscientiousness (Cronbach α=.81), extraversion (Cronbach α=.87), agreeableness (Cronbach α=.81), and neuroticism (Cronbach α=.82) [ 41 ].
Risk Perception
Risk perception is evaluated using 2 visual analogue scales (from 0 to 100): one for the objective and one for the subjective risk perception. These items were developed using the Weinstein approach [ 45 , 46 ].
Timeline
There are 3 assessment time points. At the baseline (T0), all patients complete a set of validated measures (MMAS-8, AAI, STAI-Y forms I and II, BDI-II, CBI-B/I, SOC-13, BPI, EORTC-QLQ-C30, EORTC-QLQ-BR23, BFI, and visual analogue scale). The expected time to complete all the given questionnaires at baseline is approximately 40 minutes. Access to the DSS is given to the experimental group for 3 months. At T1 and T2, the following questionnaires are completed: MMAS-8, AAI, STAI-Y form I, BDI-II, CBI-B/I, EORTC-QLQ-C30, and EORTC-QLQ-BR23. Further, all patients complete a weekly adherence medication diary for 3 months. Variables that are not sensitive to change, for example, personality (BFI) and trait anxiety (STAI-Y form II), are collected only at T0. Each month, all participants receive a brief telephone interview in which their adherence to the research protocol is monitored. Two psychologists perform the monthly telephone interview to monitor completion of the questionnaires and the medication adherence diary and to support patients with MBC with this task and with any barriers and concerns related to study participation. At T3 (3 months), all behavioral and psychological measures are completed, and an interview (online or vis-à-vis) is performed.
Calculation of Sample Size
The sample size calculation was based on estimates for the effectiveness of TREAT as the primary outcome, assuming that the final analysis would be performed with a 2-sample 2-sided t test, a power of 96.4%, an α of .05, a standard deviation of 1.2, and a minimal difference in outcomes of 0.51 (effect size of 0.42) based on the patients’ self-reported satisfaction with the treatment decision [ 53 ]. Under these considerations, the final sample size was calculated to be 100 patients with MBC, with 50 individuals per group (50 patients with MBC exposed to DSS and 50 patients with MBC not exposed to the intervention). This sample size is expanded to 120 when a 20% attrition rate is used.
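The mechanics of such a calculation can be sketched with the standard normal approximation for a two-sided two-sample comparison. The example below deliberately uses generic textbook parameters (d=0.5, 80% power) rather than the protocol's figures, since the exact t-test computation behind the published n is not reproduced here; `n_per_group` and `inflate_for_attrition` are illustrative helpers.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Per-group n, normal approximation:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z = NormalDist().inv_cdf
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2

def inflate_for_attrition(n_total, rate=0.20):
    # One common convention divides by (1 - rate); multiplying by
    # (1 + rate), as the protocol appears to do (100 -> 120), is a
    # rougher alternative.
    return math.ceil(n_total / (1 - rate))

n = n_per_group(0.5)        # ~63 per group for d=0.5 at 80% power
total = 2 * math.ceil(n)    # an exact t-test calculation gives slightly more
```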
Statistical Considerations
Exploratory Analysis
Our exploratory analysis will start with a visual exploration of all variables to evaluate the frequency, percentage, and near-zero variance for categorical variables (eg, gender, cancer stage); distribution for numeric variables (eg, age); and their corresponding missing value patterns [ 54 ]. Near-zero variance is found when a categorical variable has a small percentage of a given category and will be addressed by recategorization. We will consider variable transformations such as logarithmic, Box-Cox, or categorization of numeric variables that do not present a normal distribution. Missing values will be handled through imputation algorithms followed by sensitivity analyses to verify whether our results are stable with and without imputation [ 53 ]. Comparisons for the exploratory analysis will be conducted through analysis of variance (2-sided t tests being a category of analysis of variance) and chi-square tests (Fisher exact test when any cell presented a frequency below 5).
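As an illustration of the near-zero variance check described above, the following sketch flags categories whose share of observations falls below a threshold; the variable names, toy data, and 5% cutoff are assumptions for the example, not values from the protocol.

```python
from collections import Counter

def near_zero_variance(values, min_share=0.05):
    """Return categories whose share of observations falls below min_share;
    these are candidates for recategorization before modeling."""
    n = len(values)
    return {cat: c / n for cat, c in Counter(values).items() if c / n < min_share}

stage = ["II"] * 58 + ["III"] * 40 + ["IV"] * 2  # toy data: only 2% stage IV
print(near_zero_variance(stage))  # {'IV': 0.02} -> consider merging into 'III'
```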
Propensity Score
For residual confounding after randomization, we will use propensity scores based on inverse probability of treatment weighting [ 55 ]. We will first assess the covariate balance between control and intervention groups through standardized mean differences and differences in proportion. Covariates will include the patient demographic and clinical variables. Values greater than 0.1 will signal an imbalance in covariates. Next, we will use inverse probability of treatment weighting generated through logistic regression to balance the covariates [ 56 ]. Once we achieve covariate balance, we will estimate the effect for each outcome by using a double robust approach.
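A minimal sketch of the two balance ingredients named above: the standardized mean difference used to flag imbalance (|SMD| > 0.1) and the inverse probability of treatment weight derived from a propensity score. The covariate values are hypothetical, and a real analysis would estimate propensity scores with logistic regression over all covariates.

```python
from statistics import mean, variance

def standardized_mean_difference(treated, control):
    """SMD for a numeric covariate: difference in group means divided by the
    pooled standard deviation; |SMD| > 0.1 signals imbalance."""
    pooled_sd = ((variance(treated) + variance(control)) / 2) ** 0.5
    return (mean(treated) - mean(control)) / pooled_sd

def iptw_weight(propensity: float, treated: bool) -> float:
    """Inverse probability of treatment weight from an estimated propensity."""
    return 1 / propensity if treated else 1 / (1 - propensity)

ages_dss = [55, 60, 62, 58, 65]   # hypothetical ages, DSS arm
ages_ctrl = [54, 59, 61, 57, 64]  # hypothetical ages, control arm
print(abs(standardized_mean_difference(ages_dss, ages_ctrl)) > 0.1)  # True
```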
Generalized Linear Models
Additional analyses will be run using generalized linear models with a Gaussian distribution family (multiple linear regression) being adjusted for randomization baseline imbalances. These models will evaluate the association between all previously reported outcomes (the number of pills taken during the prescribed interval, MMAS-8, AAI, STAI-Y, BDI-II, CBI-B/I, SOC-13, BPI, EORTC-QLQ-C30, EORTC-QLQ-BR23, BFI, and risk perception) and the intervention (TREAT and machine learning web application), accounting for the baseline differences. We will build one model for each combination of predictor and outcome, where the outcome will be the dependent variable (Y) and the predictor will be the independent variable (X), adjusted for the covariates unbalanced at baseline. We will also evaluate models with a Poisson or negative binomial distribution for non–normally distributed outcomes. Results will be reported as predicted means along with 95% CIs.
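For a Gaussian family with identity link, each predictor-outcome model above reduces to linear regression. The sketch below fits the simplest case (one predictor, no covariate adjustment) by ordinary least squares; the toy data are illustrative, and the actual models would also include the covariates unbalanced at baseline.

```python
from statistics import mean

def fit_simple_ols(x, y):
    """Least-squares fit of y = a + b*x (one predictor-outcome pair)."""
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return my - b * mx, b  # intercept a, slope b

exposure = [0, 0, 1, 1]           # control vs DSS arm (toy data)
adherence = [5.0, 6.0, 7.0, 8.0]  # e.g., pills taken in the interval
intercept, effect = fit_simple_ols(exposure, adherence)
print(effect)  # 2.0 -> estimated mean difference between the arms
```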
Ethics Approval
This study is compliant with the recommendations outlined in the Helsinki Declaration [ 57 ] and the Council for International Organizations of Medical Sciences guidelines [ 58 ], as well as with the principles of biomedical ethics reported in the Belmont report [ 59 ]. This study presents a fair balance between risks and benefits for study participants and future patients affected by the same condition. No physical risks directly related to participation in the research are expected. Although psychological risks are not expected either, should any arise from participation, the psychologists responsible for the study will promptly intervene and take care of the patient. Regarding the benefits, given the impact of nonadherence on clinical outcomes, finding effective ways to increase treatment adherence is particularly relevant for the community of patients with MBC, thus playing a role in the improvement of patients’ general well-being. The principle of self-determination is also respected: a dedicated informed consent form is signed by participants before participation. Signing the informed consent form is preceded by a dialogic consent process necessary to ensure informed, voluntary, and aware participation in research. Regarding respect for privacy, this study is performed according to the General Data Protection Regulation (Regulation EU 2016/679) [ 60 ]. All data are collected in pseudonymized form. Data are treated confidentially and used only by the collaborators in this study for scientific purposes related to what is stated in the research protocol. The retrospective study was approved by the IEO ethics committee (R1595/21-IEO 1704). The prospective study was approved by the IEO ethics committee (R1786/22-IEO 1907).
Results
The recruitment process started in May 2023 and is expected to be concluded in December 2023. Data analysis will be performed between 2023 and 2024. This project has been funded by a Pfizer grant: Enhancing therapy adherence among patients with metastatic breast cancer (65080791).
Discussion
Expected Findings
Adherence to anticancer treatment is fundamental in the clinical management of patients with MBC, as adherence can lead to a series of crucial clinical outcomes (eg, prolonged survival time, better monitoring and management of side effects, improvement of QoL) [ 1 - 6 ]. Studies on implementing machine learning to foster medication adherence as well as the development of shared DSSs are quite recent [ 23 ].
We believe that our study might achieve 3 milestones in the clinical care of patients with MBC with regard to adherence to OATs. Concerning the primary end point, implementing a shared and integrated DSS web-based solution might be an essential strategy for patients with MBC to enhance their health knowledge and understanding of medication nonadherence issues and to learn behavioral strategies to overcome individual, disease-related, and organizational barriers impacting adherence behaviors. This expected result is coherent with the results of other studies on patient decision tools, highlighting that such engines might improve patients’ participation and health knowledge in the care pathway [ 61 ]. Regarding the secondary end point, this prospective study will contribute to refining the initial risk-predictive model of adherence attained from the retrospective study, informing about specific (internal and external) predictors of adherence behavior during the MBC care pathway that were not considered in the preliminary version of the model. This should provide a more comprehensive and systematic definition of medication adherence for OATs in patients with MBC, defining an interrelated and evidence-based list of predictors. These data will define a final risk-predictive model of adherence and enable the identification of specific patient populations at risk of nonadherence. The earlier identification of patients with poor adherence using our machine learning web application will permit the development of tailored psychological and behavioral interventions to foster adherence by dealing with individual barriers and needs according to the risk profile. Finally, the information retrieved by the risk-predictive models might support oncologists’ treatment decisions, allowing them to better understand the hidden reasons behind a complex adherence trajectory.
Limitations of This Trial
Despite the key contributions of this study to the field of medication adherence to OATs among patients with MBC, some limitations have to be acknowledged. First, the questionnaires used to refine our risk-predictive model might cause cognitive burden and general fatigue among participants. However, a monthly telephone interview has been introduced as a risk mitigation strategy. The interview aims to support participants in filling out the questionnaires and the medication adherence diary and to manage barriers and concerns related to this study. Related to this aspect, the second limitation concerns the recall bias that might be generated by using self-report measures. Further, self-report measures might overestimate adherence compared with other prospective methods such as the Medication Event Monitoring System. However, the weekly medication diary method, in which patients have to report a detailed and systematic description of pill counts, side effects, and barriers related to the management of the therapy and missed doses, can aid in providing a more comprehensive picture of medication adherence. Moreover, as highlighted by Shah and colleagues [ 15 ], the diary has higher accuracy and less recall bias and permits the evaluation of subjective information (eg, emotions, thoughts, worries, behaviors, expectations) related to the medication, coherently with the processual model of adherence used in this research protocol [ 8 ]. Third, a monthly follow-up and a 3-month end point have been established, considering the type of diagnosis, the high variation in prognosis, and shifts in the lines of treatment due to cancer progression or declining physical functioning. This short end point might limit a deep understanding of the efficacy of our DSS in the long term. However, previous studies have used comparable follow-ups and end points [ 13 ].
Fourth, as observed by other studies on patients with MBC [ 14 ], an additional risk that should be considered is related to the overestimated adherence rate. Patients participating in clinical trials are highly motivated, possibly leading to greater adherence to the OATs prescribed. Fifth, the initial version of the machine learning web application was not able to directly predict patient adherence. Instead, it predicted adherence risk factors, including short-term and long-term side effects, as well as physical status and comorbid conditions. This is a limitation of the data we used to build this initial version of the model, as it was not possible to reliably identify adherence patterns. However, predicting adherence risk factors can still provide insights that can assist in improving OAT adherence during shared decision-making sessions between physicians and patients.
Conclusions
Considering the poor evidence on OATs and the need to develop validated instruments to evaluate medication adherence to improve patient clinical outcomes [ 14 , 16 ], the development of integrated and shared DSSs able to foster adherence behaviors and to profile patient adherence risks might have key impacts on clinical practice and patient health-related QoL, thereby enabling the early identification of high-risk populations and enriching knowledge about the implementation of machine learning models in clinical practice.
Abstract
Background
Adherence to oral anticancer treatments is critical in the disease trajectory of patients with breast cancer. Given the impact of nonadherence on clinical outcomes and the associated economic burden for the health care system, finding ways to increase treatment adherence is particularly relevant.
Objective
The primary end point is to evaluate the effectiveness of a decision support system (DSS) and a machine learning web application in promoting adherence to oral anticancer treatments among patients with metastatic breast cancer. The secondary end point is to collect a set of new physical, psychological, social, behavioral, and quality of life predictive variables that could be used to refine the preliminary version of the machine learning model to predict patients’ adherence behavior.
Methods
This prospective, randomized controlled study is nested in a large-scale international project named “Enhancing therapy adherence among metastatic breast cancer patients” (Pfizer 65080791), aimed to develop a predictive model of nonadherence and an associated DSS and guidelines to foster patients’ engagement and therapy adherence. A web-based DSS named TREAT (treatment adherence support) was developed using a patient-driven approach, with 4 sections, that is, Section A: Metastatic Breast Cancer; Section B: Adherence to Cancer Therapies; Section C: Promoting Adherence; and Section D: My Adherence Diary. Moreover, a machine learning–based web application was developed to predict patients' risk factors of adherence to anticancer treatment, specifically pertaining to physical status and comorbid conditions, as well as short- and long-term side effects. Overall, 100 patients consecutively admitted to the Division of Medical Senology at the European Institute of Oncology (IEO) will be enrolled; 50 patients with metastatic breast cancer will be exposed to the DSS and machine learning web application for 3 months (experimental group), and 50 patients will not be exposed to the intervention (control group). Each participant will complete a weekly medication diary and a set of standardized self-reports evaluating psychological and quality of life variables (Adherence Attitude Inventory, Beck Depression Inventory-II, Brief Pain Inventory, 13-item Sense of Coherence scale, Brief Italian version of Cancer Behavior Inventory, European Organization for Research and Treatment of Cancer Quality of Life 23-item Breast Cancer-specific Questionnaire, European Organization for Research and Treatment of Cancer Quality of Life Questionnaire, 8-item Morisky Medication Adherence Scale, State-Trait Anxiety Inventory forms I and II, Big Five Inventory, and visual analogue scales evaluating risk perception).
The 4 assessment time points are T0 (baseline), T1 (1 month), T2 (2 months), and T3 (3 months). This study was approved by the IEO ethics committee (R1786/22-IEO 1907).
Results
The recruitment process started in May 2023 and is expected to conclude in December 2023.
Conclusions
The contribution of machine learning techniques through risk-predictive models integrated into DSSs will help foster medication adherence among patients with cancer.
Trial Registration
ClinicalTrials.gov NCT06161181; https://clinicaltrials.gov/study/NCT06161181
International Registered Report Identifier (IRRID)
DERR1-10.2196/48852
The MMAS-8 Scale, content, name, and trademarks are protected by US copyright and trademark laws. Permission for use of the scale and its coding is required. A license agreement is available from MMAR, LLC ( www.moriskyscale.com ).
This project has been funded by a Pfizer grant: Enhancing therapy adherence among patients with metastatic breast cancer (65080791).
Abbreviations
AAI: Adherence Attitude Inventory
BDI-II: Beck Depression Inventory-II
BFI: Big Five Inventory
BPI: Brief Pain Inventory
CBI-B/I: Brief Italian version of Cancer Behavior Inventory
DSS: decision support system
EORTC-QLQ-BR23: European Organization for Research and Treatment of Cancer Quality of Life 23-item Breast Cancer-specific Questionnaire
EORTC-QLQ-C30: European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30
IEO: European Institute of Oncology
MBC: metastatic breast cancer
MMAS-8: 8-item Morisky Medication Adherence Scale
OAT: oral anticancer treatment
QoL: quality of life
REDCap: research electronic data capture
SOC-13: 13-item Sense of Coherence scale
STAI-Y: State-Trait Anxiety Inventory
TREAT: treatment adherence support
Data Availability
Data will be available from the corresponding author on reasonable request.
Citation: JMIR Res Protoc. 2023 Dec 14; 12:e48852
PMC10759156 (PMID 38055568)
Introduction
To successfully perform a polychromatic flow cytometry experiment, nothing is more important than correct compensation. Previously it was demonstrated that when there were no spectral differences between experimental samples and compensation controls, beads could be used to compensate cells ( 1–3 ). Maciorowski et al. ( 4 ) advised users to check single-color stained cells and single-color stained beads to identify which produces better compensation and use that for the rest of the experiment. However, this guidance is mostly qualitative, not quantitative, and not based on any comprehensive side-by-side comparison of beads and cells. This is further complicated by the large number of compensation bead options currently on the market.
Spillover in both full-spectrum and conventional platforms from the fluorochromes needs to be rectified using either unmixing (where the number of detectors is higher than the number of fluorochromes) or compensation (where the number of detectors is the same as the number of fluorochromes), respectively.
Unmixing/compensation (hereafter referred to as correction/corrected/correct) is often an issue with these experiments due to the disparity of effectiveness between beads and cells. In most cases, users have not performed all of the controls to address this issue. Furthermore, the published literature does not demonstrate side-by-side cell-based versus bead-based corrected data.
It is known that bead-based compensation sometimes generates unexpected or incorrect outcomes. Interestingly, recent studies involving high-parameter panels used cells, not beads, to generate single stains ( 5–7 ). These studies used the full-spectrum Cytek Aurora. It is becoming more evident that full-spectrum machines are more sensitive than photomultiplier tube–based conventional machines. It appears that cell-based single stains are gaining traction for these full-spectrum machines. Of course, the use of cells is limited by the availability of sufficient numbers of cells for single-color controls and experimental staining. Researchers have compared beads and cells for compensation in the past, leading to direct publication of the comparisons or the use of mixed cells/beads for compensation matrices (Refs. 8–12 and Y. Shevchenko, I. Lurje, F. Tacke, and L. Hammerich, manuscript posted on bioRxiv, DOI: 10.1101/2023.06.14.544540), but all of these works are limited to one or two types of bead versus cells. Interestingly, many university flow cytometry core facilities and compensation bead vendor pages mention that the compensation generated using beads is not always perfect. A CYTO (International Society for Advancement of Cytometry) poster presented in 2019 ( 13 ) and a poster from Friend et al. ( 14 ) at the 2023 American Chemical Society conference studied the same concept in a limited manner. Taken together, these findings suggest that the existence of differences between compensation beads and cells is a commonly recognized issue, but the differences between the many beads currently on the market have not been investigated, suggesting a general assumption that the differences between the bead sets are negligible. To investigate the merits of this likely assumption, we compared eight of the beads available on the market at the time of initiation with human peripheral blood cells, with all samples run on the same day, and the data were compared systematically.
To minimize additional caveats to the comparisons, the comparison methods and panel design were deliberately kept simple.
These data suggest that experimental design should include additional time to address correction controls and to determine the appropriate controls for each experiment. With this work, we also want to bring this topic to the attention of a wider audience: resources and time need to be devoted to fixing this issue. Highly experienced flow cytometry experts from both academia and industry need to investigate this problem closely and work together to solve it. If left unanswered, this phenomenon will ultimately cause massive data reproducibility problems, even with honest efforts on the user's side.
Materials and Methods
Cells
PBMCs from two healthy male adults were isolated from a leukocyte reduction system chamber purchased from the Oklahoma Blood Institute with Lymphoprep (STEMCELL Technologies). Isolated PBMCs were resuspended in serum-free cell freezing media (Bambanker, Bulldog Bio) and stored at −80°C for later use. Oklahoma Blood Institute obtained relevant informed consents for blood product use for research purposes. Because the chambers are a byproduct of platelet donation and do not cause additional risk to the donor, the East Carolina University Institutional Review Board does not require a board review of protocols. Protocols were approved by East Carolina University Institutional Biosafety Committee (protocol 01-19). All methods were carried out in accordance with relevant guidelines and regulations.
Ab capture bead or compensation bead
Eight different beads (details in Table I ) from Thermo Fisher Scientific (UltraComp, OneComp, AbC Total), BD Biosciences, Beckman Coulter (VersaComp), Miltenyi Biotec (MACS), Spherotech (COMPtrol), and Slingshot were used. Only the positive capture beads from BD Biosciences, Beckman Coulter, MACS, COMPtrol, and AbC Total were used for this work. Unlabeled positive capture beads (without Abs) were used as the negative control to ensure the same autofluorescence (AF) between Ab-bound and unbound beads. Slingshot, OneComp, and UltraComp provide only one vial with blank and positive beads. These three unstained beads were run as the negative control.
Ab–fluorochrome conjugate
All experiments to establish a median mismatch index (MMI) were done using a human CD4 evaluation kit (BD Biosciences, 566352) except for the Nova Yellow 690, which was purchased from Thermo Fisher Scientific (H001T03Y05). The following fluorochromes were used: BUV615, BUV661, and BUV737 (under 355-nm excitation); BV421, BV650, and BV711 (under 405-nm excitation); FITC and BB700 (under 488-nm excitation); PE-CF594, PE-Cy5, and Nova Yellow 690 (under 561-nm excitation); and allophycocyanin and Alexa Fluor 700 (under 640-nm excitation). DAPI, CD14 PE-Cy5 (BioLegend, 301863), CD3 allophycocyanin-R700 (BD Biosciences, 565120), CD4-allophycocyanin (BD Biosciences, 566915), CD8-BV786 (BioLegend, 344740), CD19 BB700 (BD Biosciences, 566397), and CD45RA-BV711 (BioLegend, 304137) were used to generate a small panel for testing the effect of compensation beads on biological interpretation in terms of the position of these populations on the plot.
Staining of PBMCs and beads
The following steps detail the staining procedures used.
1) Cryopreserved PBMCs were thawed in a 37°C water bath, shaking continuously until the suspension was almost thawed. When only a small amount of ice was left in the cryotube, the suspension was transferred to a 15-ml conical tube, and 10 ml of 37°C media containing 2 U/ml endonuclease was added dropwise to the suspension. Following centrifugation at 400 × g for 10 min at room temperature, cells were washed with 10 ml of 37°C media, filtered gently through a 40-μm cell filter, and counted.
2) Cells (1 × 10^6) were dispersed into flow cytometry–appropriate tubes, pelleted again at 400 × g for 5 min at 4°C, and resuspended in 100 μl of staining buffer (BD Biosciences, 554656). In the case of beads, seven drops of beads were added to a FACS tube, followed by the addition of stain buffer to reach a volume of 1400 μl. The beads were then distributed equally into 14 tubes.
3) One hundred fifty and 500 ng of CD4 Abs were added to cells and beads, respectively (at saturation).
4) The samples were stained at 4°C for 30 min in the dark.
5) Cells/beads were washed twice in 3 ml of cold stain buffer, followed by 5 min of centrifugation at 400 × g (acceleration 9, deceleration 7).
6) Both cells and beads were resuspended in 200 μl of staining buffer and analyzed immediately on both cytometers.
Flow cytometers
Five-laser Cytek Aurora and four-laser BD FACSAria Fusion cell sorters were used for data generation. Details of the Aurora configuration can be found in Brandi et al. ( 5 ). Configuration details of the sorter are available on our Web site ( https://medicine.ecu.edu/flow-core/instruments-configuration/ ). The Aurora was configured to gain values recommended by Cytek assay settings. The Fusion was configured using photomultiplier tube voltages recommended by cytometer setup and tracking beads instructions provided by the manufacturer. Quality control assays recommended by the manufacturer were run before every experiment.
Software
SpectroFlo 2 and FACSDiva 8 were used for data acquisition. Conventional flow data were analyzed using FlowJo 8, and full-spectrum data were analyzed using SpectroFlo 2. Graphs were prepared using either Microsoft Excel or GraphPad Prism. FlowJo 8 was used to generate the t-distributed stochastic neighbor embedding (tSNE) plots. After each unmixing application, full stain cell data were exported, cleaned, downsampled to the same number, concatenated, and then opt-SNE was run.
Data collection
Ten thousand events in the main population in the scatter plot were collected for compensation bead analyses. At least 50,000 lymphocytes in the scatter plot were collected for cell analyses. A low flow rate was used for Aurora data collection, but a varying flow rate was used for Fusion collection. Data are available on FlowRepository (accession no. FR-FCM-Z6KC). Except for the seven-color panel for tSNE work, all data were repeated on the Aurora and the Fusion. We processed data from both machines, and the output from both machines is similar. All of the plots in this article were created using Aurora data only. Seven-color panel data were acquired only on the Aurora. We ensured that the data from both machines were within the linearity range of each detector. The baseline cytometer setup and tracking beads report provides the maximum and minimum linearity of each detector and can confirm that our data are within this range. In the case of the Aurora, the manufacturer-recommended quality control does not provide this range; however, service engineers do check this range during preventive maintenance. We can confirm that our Aurora data are within the linearity range.
Dataset used
External data from optimized multicolor immunofluorescence panel articles were used for additional analyses ( 15–18 ).
Compensation/unmixing strategy
We chose to use single-stained samples to evaluate the accuracy of compensation or unmixing, as shown by Verwer ( 19 ) in his 2002 white paper. The “negative” cells or beads in the single-stain tubes were not used for calculation because they always have some nonspecifically bound Ab, which changes the fluorescence intensity level of the reference control, leading to erroneous calculations; this would violate the rule of identical AF. Separate negative controls were used for compensation/unmixing calculations. Compensation matrices and unmixing were autocalculated in the software using bead or cell data and then applied to the cell or bead data. The median fluorescence intensities (MFIs) of the single-stained positive cells were measured and compared with the MFIs of the single-stained positive beads. When the beads had a lower MFI than the cells, the data for those fluorochromes were excluded from the compensation or unmixing calculations. SpectroFlo does allow AF removal, but we have not used this feature in this work.
Quantifying compensation/unmixing mismatch
An MMI (see Fig. 1 ) was used to quantify median mismatch, expressed as follows: MMI% = 100 × [(positive MFI(y-axis) − negative MFI(y-axis))/negative RSD(y-axis)], where RSD is the robust SD. The MMI is similar to the secondary stain index ( 20 ) but is modified and two times more robust. We made a classification of acceptable and unacceptable median mismatches. These values are classified based on deviation from the median of the negative population on the y-axis. Values between −125 and 125 are indicated in green (acceptable); values between −175 to −125 and 125 to 175 are in orange (partly acceptable); and values beyond −175 and 175 are in red (not acceptable). Some examples can be found in Fig. 2 . At this time there is no quantitative method to describe the median mismatch. The rationale behind these categories is simple. A median mismatch within the green band was considered acceptable based on visual appearance, which shows no indication of a double-positive population in a single-stained sample; this insignificant amount of variation does not influence the outcome in a misleading way. In the orange category, the deviation may slightly influence the outcome (it appears as a slightly double-positive population), but an expert can identify these cases easily and will not consider the population to be double-positive. In contrast, a median mismatch in the red category quite easily appears as a false double-positive; this amount of deviation can easily change the conclusion of a study. Hence, the red category needs to be avoided.
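The MMI formula and the color bands described above can be expressed directly in code. The MFI and RSD values in the example are hypothetical, and the boundary values (±125, ±175) are assigned here to the less severe band, a choice the text leaves open.

```python
def mmi_percent(pos_mfi_y, neg_mfi_y, neg_rsd_y):
    """MMI% = 100 * (positive MFI - negative MFI) / negative robust SD,
    all measured on the spillover (y) axis."""
    return 100 * (pos_mfi_y - neg_mfi_y) / neg_rsd_y

def mmi_category(mmi):
    """Color bands from the text: within +/-125 green, +/-175 orange, beyond red."""
    a = abs(mmi)
    if a <= 125:
        return "green (acceptable)"
    if a <= 175:
        return "orange (partly acceptable)"
    return "red (not acceptable)"

m = mmi_percent(pos_mfi_y=380, neg_mfi_y=200, neg_rsd_y=120)  # hypothetical MFIs
print(m, mmi_category(m))  # 150.0 orange (partly acceptable)
```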
Generating the fluorescence emission profile
Cytek-recommended steps were used to calculate all the spectra from the raw data.
Ab binding sites on beads compared with cells
Next, the approximate number of Ab binding sites on the beads relative to the binding sites present on cells was calculated, as this could potentially explain why some beads failed the bright-and-brighter rule. ΔMFI(cell) = MFI(positive cell) − MFI(blank cell) and ΔMFI(bead) = MFI(positive bead) − MFI(blank bead) were calculated. The number of Ab binding sites on the bead per Ab binding site on the cell is then ΔMFI(bead)/ΔMFI(cell). Table II shows that the number of binding sites can vary within one type of bead. This could be due to the size of the fluorochrome and/or the geometry of the bead surface that allows the Ab to bind. As the clone and Ab amount remain the same, the only variables are the fluorochromes. If Ab binding to the bead were fluorochrome-independent, one would expect the same number of binding sites on a specific bead type, which is not the case.
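The binding-site ratio described above is a simple background-subtracted MFI ratio; in this sketch the MFI values are hypothetical.

```python
def binding_site_ratio(mfi_pos_bead, mfi_blank_bead, mfi_pos_cell, mfi_blank_cell):
    """Approximate Ab binding sites on a bead per binding site on a cell:
    the ratio of background-subtracted median fluorescence intensities."""
    return (mfi_pos_bead - mfi_blank_bead) / (mfi_pos_cell - mfi_blank_cell)

# Hypothetical MFIs: the bead is 3x brighter over background than the cell
print(binding_site_ratio(90_000, 600, 30_400, 600))  # 3.0
```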
Effect of compensation beads on biological data interpretation
A small panel using DAPI (2 ng/ml), CD3 allophycocyanin-R700, CD19 BB700, CD4 allophycocyanin, CD8 BV786, CD14 PE-Cy5, and CD45RA BV711 Abs was designed. To unmix the fully stained cell sample, single stains from four different beads (UltraComp, OneComp, MACS, and BD Biosciences) and cells were made: 0.75 μl of each Ab was used for the full stain sample, and 2 μl of each Ab was used for all single-stain preparations. Data were analyzed after applying different unmixing matrices on the fully stained cell sample ( Supplemental Fig. 1 ). Fig. 3 shows the side-by-side comparison of tSNE plots of three repeats using these four beads. One dedicated unstained cell sample was used to unmix the DAPI signal in both cell-based and bead-based unmixing strategies. SpectroFlo software does allow multiple unstained samples as a reference, and we took full advantage of this feature. FACSDiva allows only one universal unstained sample as a reference.
Results
Identification of baseline variation of median mismatch
Single-stained cell samples were compensated/unmixed using the same cell samples ( Tables I , II ). The spillover matrix is assumed to be perfect, as the same data were used for the unmixing calculations and as test samples. For each combination, MMI values were calculated ( Fig. 1 ). Examples of the data plots with MMI values denoting acceptable (green), partially acceptable (orange), and not acceptable (red) spillover are shown in Fig. 2 . Gates were placed at the center of the negative population, and MMI categories were chosen based on plot visualizations and MMI numbers. These experiments were repeated three times for a full panel ( Fig. 3 ) and single-color stains ( Fig. 4 ) with similar results.
Identification of median mismatch when using beads for correction
Next, unmixing was calculated using beads and then applied to the cell data. The first rule for compensation is that the single stains used should be as bright as or brighter than the experimental sample ( 21 ). COMPtrol and VersaComp beads consistently failed to show brighter signals than the cells (even though they were stained to saturation). As they failed to meet this rule, they were not used to correct cells. AbC Total and Slingshot beads appear brighter for some fluorochromes (such as FITC, PE-CF594, PE-Cy5, allophycocyanin, and Alexa Fluor 700), but not all fluorochromes; only these fluorochromes were used to correct cells. BD Biosciences, OneComp, UltraComp, and MACS beads were brighter than cells for all fluorochromes in all three repeats, so all fluorochromes of these four bead types were used to correct cells.
Similar to the above cell-on-cell data, MMIs for all combinations were calculated. As shown in Fig. 4 (rows 2–7), many unacceptable median mismatches (red) were observed, and a high quantity of red was observed for all beads assessed. The patterns in the matrix were reproducible in all three repeats. Although the patterns were reproducible, the individual fluorophores did not overlap perfectly between repeats, indicating some variability in the spectra between independent runs.
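Although the exact vendor unmixing algorithms are proprietary, spectral unmixing is commonly described as a least-squares fit of single-stain reference spectra to the per-detector signal. The minimal sketch below (hypothetical two-dye, three-detector numbers, not from this study) illustrates why a bead-derived reference spectrum that differs from the true cellular spectrum leaves residual, mismatched signal after unmixing:

```python
# Minimal illustration of least-squares spectral unmixing (not the SpectroFlo
# algorithm): detector signals are modeled as a linear mix of reference spectra,
# and a slightly wrong reference (e.g., bead-derived) leaves residual spillover.

def unmix(ref_a, ref_b, signal):
    """Solve [ref_a ref_b] @ x = signal in the least-squares sense (2 dyes)."""
    # Normal equations for a two-column design matrix.
    aa = sum(a * a for a in ref_a)
    ab = sum(a * b for a, b in zip(ref_a, ref_b))
    bb = sum(b * b for b in ref_b)
    ay = sum(a * y for a, y in zip(ref_a, signal))
    by = sum(b * y for b, y in zip(ref_b, signal))
    det = aa * bb - ab * ab
    return ((bb * ay - ab * by) / det, (aa * by - ab * ay) / det)

# "True" emission spectra of two dyes across three detectors (hypothetical).
true_a = [1.0, 0.4, 0.1]
true_b = [0.1, 0.5, 1.0]

# A cell stained only with dye A at abundance 100.
signal = [100 * v for v in true_a]

# Unmixing with the correct (cell-derived) spectra recovers (100, 0).
cell_est = unmix(true_a, true_b, signal)

# A bead-derived spectrum for dye A that is modestly shifted.
bead_a = [1.0, 0.5, 0.05]
bead_est = unmix(bead_a, true_b, signal)
# The spectral mismatch now shows up as spurious dye-B signal.
```

With the correct reference the fit is exact; with the shifted bead reference, dye A is underestimated and a nonzero dye-B abundance appears, which is precisely the kind of median shift the MMI matrices quantify.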
Comparison of spectra
First, the same fluorochromes were compared across beads. As shown in Fig. 5A and 5B , the emission profiles of FITC and PE-Cy5 are slightly different when comparing different beads. Further testing to determine whether these differences can be identified by the SpectroFlo software revealed that all FITC or PE-Cy5 spectra have similarity indices of 1; thus, for the software, they are identical. Next, assays were performed to determine whether these variations were intrinsic in nature or due to the beads themselves. FITC- and PE-Cy5–stained MACS beads were run five times on the same day and the plots were overlapped, demonstrating perfect overlap ( Fig. 5C , 5D ).
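The similarity index computed by SpectroFlo is not published in detail; assuming a cosine-similarity-style comparison of normalized spectra reported to two decimals (an assumption, not the vendor's documented formula), the sketch below shows how two visibly different profiles can still yield an index that rounds to 1:

```python
import math

# Hypothetical comparison of two slightly different emission profiles.
# A cosine-similarity-style index is assumed here; the exact SpectroFlo
# calculation is proprietary, and all profile values are illustrative.

def similarity(spec1, spec2):
    dot = sum(a * b for a, b in zip(spec1, spec2))
    n1 = math.sqrt(sum(a * a for a in spec1))
    n2 = math.sqrt(sum(b * b for b in spec2))
    return dot / (n1 * n2)

fitc_beads = [0.20, 1.00, 0.55, 0.20, 0.05]   # illustrative FITC profile on beads
fitc_cells = [0.22, 1.00, 0.52, 0.21, 0.06]   # visibly shifted profile on cells

idx = similarity(fitc_beads, fitc_cells)
# The profiles differ, yet the index rounds to 1.00, matching the observation
# that the software treats such spectra as identical.
```

This illustrates how a coarse, rounded similarity metric can mask spectral differences that are large enough to produce measurable median mismatch after unmixing.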
Next, the same fluorochromes staining the same donor cells, but on different dates, were assessed. Fig. 5E and 5F show that the emission profiles (cells only) of FITC and PE-Cy5 are slightly different when comparing three different dates. The quality of unmixing was then further assessed: the first set of cell data (repeat 1) was unmixed by the cell data from the other two repeats (with the bright and brighter rule also verified). These data clearly reveal substantial unmixing errors, evident from the presence of many unacceptable (red) MMI values ( Fig. 6 ). When combined, these data reveal that there are intrinsic differences between independent runs.
Baseline variation for beads
Next, the same bead single stains were unmixed using the exact same bead single stain samples, similar to how the cell-to-cell unmixing data were assessed above, to assess the amount of baseline variation present in the bead data. These assays generated matrices with considerable numbers of unacceptable (red) categories ( Fig. 7 ). OneComp and AbC Total have a substantial median mismatch, whereas MACS, UltraComp, and BD Biosciences beads have some problems. VersaComp and COMPtrol, which failed bright and brighter for cells, have fewer median mismatch problems in these analyses (this could be due to low MFI, because MMI is not a brightness-independent parameter). Finally, Slingshot beads have mismatch patterns similar to MACS, UltraComp, and BD Biosciences beads but failed bright and brighter for some fluorochromes in Fig. 4 . These issues were completely unexpected, as these observations have not been previously reported in the literature. To rule out user error or instrument issues in our data, publicly available bead data ( 15–18 ) were assessed similarly: the data correction was calculated using the single stain bead data and applied to the same bead data. As the data were generated by different laboratories using different BD FACSymphony A5 cytometers, individual user error and instrument issues were minimized. BD Biosciences beads were used in three instances, whereas the fourth dataset used UltraComp beads. The data ( Supplemental Fig. 2 ) show that these observations are a global phenomenon.
All of the data so far in this study have been based on assessing single colors with a single abundantly expressed surface protein (CD4) on cells. Next, the differences between correcting a small panel of markers using single-color cells or a selection of beads were assessed. Abs were used to identify B cells, T cells, monocytes, and CD45RA + cells. The differences are especially evident in individual marker assessments, where population overcorrection or undercorrection is apparent ( Supplemental Fig. 1 ). Comparisons of the cell-based tSNE assessment to the four bead-based analyses indicate substantial differences in the population distributions between correcting using cells or beads ( Fig. 3 ). These data suggest that although unmixing/compensating flow cytometry data using cells is ideal, specific commercially available beads require additional considerations before use.
Discussion
To investigate how well compensation beads approximate experimental cells, datasets with cells and eight commercially available Ab-capture compensation beads (beads from BioLegend and Cytek were not released at the time of this study) were generated using 13 commonly used fluorochromes. The literature contains no information indicating that these fluorochromes should not be used with beads for proper correction. In this study, we report direct side-by-side comparisons between commercially available beads and primary cells. The bead-based compensation and unmixing data reveal that not all fluorochromes are adequately corrected using beads, which could skew analyses inappropriately.
Choice of the fluorochrome
The fluorochromes used were specifically chosen to generate a high amount of spillover, enabling these assessments. BUV615 and PE-CF594 were expected to have some spillover. BUV661, BV650, PE-Cy5, and allophycocyanin form one group, and BUV737, BV711, BB700, Nova Yellow 690, and Alexa Fluor 700 another, with similar emission maxima within each group; hence, cross-laser bleedthrough is expected. FITC and BV421 served as negative controls for spillover because they delivered the lowest amount of spillover into other detectors and received the lowest amount from other fluorochromes. Accordingly, FITC and BV421 were expected to always fall in the acceptable (green) category, consistent with the data collected ( Fig. 4 ). Fig. 4 shows that some fluorochromes work properly for some detectors, but this depends on the bead brand. For example, the BUV615 detector works well for UltraComp beads, but every other bead assessed had at least one fluorochrome that was not properly corrected in this detector. The PE-CF594 detector generally works well for all beads. The FITC detector worked every time for all bead brands, and the BV421 detector behaved almost as well, working properly for all beads except AbC Total. This was unsurprising, as BV421 and FITC were chosen for their limited spillover into other detectors.
For primary markers paired with those fluorochromes, we used only a few Abs against well-established markers (except CD45RA), yet we still found different results between cell- and bead-based unmixing. An unexpected result was that, of the eight bead sets tested, two (COMPtrol and VersaComp) failed to elicit brighter signals than cells, leading to their exclusion from this study. Two other bead sets (Slingshot and AbC Total) could only be used to correct five fluorochromes. In a high-parameter panel where researchers are trying to study the role of a new marker, a less defined marker, or a marker in a disease state or in an exploratory type of work, this variability could lead to many issues with interpretation of the data. It is highly likely that in these scenarios, researchers could be misled by compensation bead-based data correction. This variability, leading to differing results, could also severely impact the repeatability and rigor of these studies.
In this study, the rules of compensation were followed as closely as possible. To preserve the emission profiles of the fluorochromes, Abs were always kept and maintained in the dark and cold, and data were acquired as soon as possible. All beads and cells were stained with the exact same Ab vial to ensure that each assay used the same fluorochrome without batch differences. Sufficient data were collected to get a statistically robust calculation. Another factor to consider is AF from the beads. AF within cell populations is known to elicit differing AF patterns based on cell types. BD Biosciences, AbC Total, MACS, COMPtrol, and VersaComp provided positive and blank beads in separate vials. To ensure that the AF of the universal negative control and the positive beads remained the same, only the positive bead was used in these cases, making the AF assessment more straightforward. OneComp, UltraComp, and Slingshot provide both beads in the same vial. Users need to ensure that both of these beads have the exact same AF; a mismatch may introduce an additional source of error in the single stain control data, potentially changing the unmixing or compensation. UltraComp (lot no. 2306368) had this problem, so a new lot of beads was acquired for further experiments.
The data herein indicate that users should assess the AF of their specific beads before staining single stain controls and performing flow cytometry experiments, to ensure that marker expression is not miscalculated because of an AF mismatch.
Compensation controls need to be as bright as or brighter than samples. All of the beads used in this study were tested beforehand to determine the Ab amount needed for saturation of each of the different beads, ensuring that the assays reached the maximum intensity of the beads. For each dataset, MFI was checked to confirm that the beads were brighter than the cells, to fulfill the first rule of compensation/unmixing. This was especially important for COMPtrol and VersaComp beads, as the data demonstrated that they generally produce dimmer signals than cells do for CD4; because of this, they were precluded from further assessments. Of interest, AbC Total and Slingshot beads were brighter for some fluorochromes, but not others. As the Abs used to compare the beads to the cells were against a highly abundant protein (CD4), less abundant proteins may allow these beads to produce brighter signals than those from the cells. Taken together, these data signify that bead choice is highly dependent on several factors, that is, the AF of the beads, the relative abundance of specific cell markers, and the binding sites on the beads.
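The "bright and brighter" screen described above amounts to a per-fluorochrome MFI comparison between the bead single stain and the stained cells. A minimal sketch, with entirely hypothetical MFI values, could look like this:

```python
# Sketch of the "bright and brighter" screen applied before using a bead lot:
# a bead single stain qualifies for a fluorochrome only if its MFI meets or
# exceeds the stained-cell MFI. All MFI values below are hypothetical.

cell_mfi = {"FITC": 48_000, "PE-Cy5": 120_000, "APC": 95_000}
bead_mfi = {"FITC": 61_000, "PE-Cy5": 88_000, "APC": 140_000}

usable = {f: bead_mfi[f] >= cell_mfi[f] for f in cell_mfi}

# Only the qualifying fluorochromes would be corrected with this bead lot;
# the rest fall back to cell-based single stains.
qualifying = sorted(f for f, ok in usable.items() if ok)
```

In this hypothetical lot, PE-Cy5 fails the screen and would require a cell-based single stain, mirroring the mix-and-match situation observed for AbC Total and Slingshot beads.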
What is an acceptable median mismatch?
MMI is our choice for quantifying median mismatch. This parameter measures only the median mismatch between the blank and positive populations on the desired secondary detector, in terms of the RSD of the blank.
Flow cytometrists have estimated compensation accuracy for years using N × N plots, a well-accepted technique ( 4 , 6 ), but there is no well-accepted parameter to quantify the degree of median mismatch. The same approach was used in the current study, with the MMI added to quantify the median mismatch.
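The exact MMI formula is not spelled out here; one plausible formulation consistent with the description above (the difference between the positive and blank population medians on a secondary detector, scaled by a robust SD of the blank) can be sketched as follows. The robust SD estimator and all signal values below are assumptions for illustration only:

```python
import statistics

# One plausible reading of the median mismatch index (MMI): the difference
# between the positive and blank population medians on a secondary detector,
# scaled by a robust SD (rSD) of the blank. Here rSD is estimated as
# 1.4826 x the median absolute deviation; the authors' exact estimator
# may differ.

def mmi(secondary_pos, secondary_blank):
    med_pos = statistics.median(secondary_pos)
    med_blank = statistics.median(secondary_blank)
    mad = statistics.median(abs(x - med_blank) for x in secondary_blank)
    rsd = 1.4826 * mad
    return (med_pos - med_blank) / rsd

# Hypothetical secondary-detector signals after unmixing.
blank = [-6, -3, -1, 0, 1, 2, 5, 7]              # centered near zero
well_corrected = [-4, -2, 0, 1, 2, 3, 6, 8]      # median barely shifted
over_spilled = [30, 34, 38, 40, 43, 47, 52, 60]  # clear positive mismatch
```

A well-corrected population yields an MMI near zero, whereas residual spillover produces a large MMI, which is the behavior the green/orange/red categories summarize.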
Although the general classifications used in this study are partly subjective, the data in Figs. 4 , 6 , and 7 and Supplemental Fig. 2 clearly visualize the correction problems between cells and beads in combination with fluorochrome usage. Indeed, this is especially apparent in assessing CD3 versus CD45RA ( Supplemental Fig. 1 ), where these fluorochromes are inadequately unmixed to different degrees by the beads assessed.
Additional experiments focused on the red categories, which are most easily identifiable and appear double-positive instead of single-positive. This is especially apparent in Fig. 2 , where the top right panel with an MMI of 1159 appears as a double-positive population even though only one fluorochrome was used in that sample, clearly demonstrating the dangers of improper unmixing/compensation. Correcting the experimental cell data by the same cell data provides the baseline variation for this study because it uses the cells being experimentally analyzed for compensation. The median mismatch is rarely "0," which was elegantly explained previously by Roederer ( 22 ). In all three repeats, the matrices were almost completely green ( Fig. 4 , row 1). Across all three repeats, out of 468 possible combinations, only one was red, indicating good correction, which is expected from flow cytometry controls. Similar observations were made on the BD Fusion. Interestingly, cell data could not be properly corrected using cell data acquired on different days ( Fig. 6 ), indicating variability in fluorochrome spectra from day to day. These results also imply that 1) the algorithms in both software packages are working accurately, 2) the gating strategy for correction was working well, 3) cell data can be successfully corrected by the same cells acquired on the same day, and 4) it is possible to correct data generated from these fluorochromes successfully. These results serve as positive controls.
Previous studies have used bead-based compensation for cells, but the assumption has always been that the beads adequately mimic cells as compensation controls. It is abundantly clear from Fig. 4 (rows 2–7) that bead-based correction produces suboptimal results when applied to cells, compared with cell-based controls applied to cells. This means that even with the best effort from researchers, the same experiments/conclusions will be difficult to repeat, which will only fuel the present reproducibility crisis ( 23–25 ). At the same time, improper correction can lead to identifying incorrect or biologically nonexistent populations ( 4 ).
The issues with bead-based compensation controls on multicolor stains are abundantly clear when comparing our data in Fig. 3 and Supplemental Fig. 1 . In this study, we tested four different bead-based unmixing matrices, using the bead sets that passed the bright and brighter rule for compensation, on one FCS file generated using a multicolor panel. All of the tSNE plots were different: several islands were either missing or newly appeared in the bead-based tSNEs compared with the cell-based tSNE. Indeed, Fig. 3 and Supplemental Fig. 1 clearly show differences in the panels between cell- and bead-based compensation. The analyses also show improper correction leading to population loss and/or gain compared with cell-based correction. The incorrect populations can be easily identified because these markers have been extensively studied and are well characterized. This means that if five different scientists ran the exact same experiment using the exact same protocol, but each used a different bead set for single stain preparation, all five of them would end up with five different results. With improper correction, it is still possible to draw the correct regions/gates if fluorescence minus one controls are used for all fluorochromes ( 3 ), but that is not practical to implement. Of note, not every fluorochrome is suboptimal with every compensation bead set, suggesting that a combination of cells and one or more bead sets could unmix/compensate data properly.
In general practice, when controls versus experimental samples are compared using tSNE, data are digitally combined and analyzed using these concatenated files before separating the tSNE plots for comparison. This approach was used to compare cell-based versus bead-based correction and revealed many differences between the two. These differences are only present due to the source of correction, suggesting that they are not real and are only artifacts. These findings indicate multiple parameters for which researchers need to account, including bead type and fluorochromes used when assessing gate placement.
As mentioned above, single-stained cells could not be properly corrected using single-stained cells acquired on different days, implying changes in emission profiles, at least with cells. Next, whether the emission spectrum of the same fluorochrome appears different depending on the bead was assessed. FITC (a primary fluorochrome) and PE-Cy5 (a well-known tandem) were used to assess these potential differences. Both fluorochromes showed slight visual differences in emission profile depending on the beads ( Fig. 5A , 5B ). Based on the rules of compensation, these differences cause the median mismatch in cell data. Similarity index calculations showed that these variations cannot be identified by the SpectroFlo software. Based on the data in Fig. 5C and 5D , it is highly likely that the slight changes in spectra are due to the beads themselves. The best assumption that can be made is that the material used in the bead preparation is somehow interfering with the fluorochrome emission. The mechanism(s) by which the bead sets lead to these disparities is a complex issue. A major obstacle to further chemical analysis of the bead sets is the potential legal issues surrounding proprietary bead compositions, although a better understanding of the interactions between some fluorochromes and some bead sets would improve panel design for researchers.
Recently, Thermo Fisher Scientific warned its users that UltraComp eBeads are incompatible with BV786 and SB780 and recommended using cells over UltraComp eBeads for these fluorochromes ( 26 ). These data, both from this study and from Thermo Fisher Scientific, reflect that these issues are not unknown to the bead manufacturers. Recently, another study compared two beads and found that one worked better than the other ( 12 ). Brummelman et al. ( 27 ) showed in their figure 1 that single stain cells and single stain beads behave differently upon compensation. Taken together, these data indicate that although unmixing/compensating flow cytometry data using cells is ideal, specific commercially available beads require additional considerations before use.
Beads failed to unmix themselves
Unlike Fig. 4 (row 1), where cells adequately corrected themselves, beads failed to do so, to an extent dependent on the bead type ( Fig. 7 ). COMPtrol beads performed the best, with just a few red categories (this could be due to their low brightness). In contrast, AbC Total and OneComp beads performed the worst, with the maximum number of red categories. Based on current knowledge and dogma, this phenomenon should not exist and (to our knowledge) has not been previously reported; there is currently no explanation in the literature for why it exists. Naturally, this raises the question of how, or to what extent, data correction using beads can be trusted when the beads cannot correct themselves. This phenomenon can be recapitulated using publicly available datasets run on different machines ( Supplemental Fig. 2 ), indicating that it is not an isolated issue caused by technical errors, artifacts from sample preparation or the cytometer, or user error.
Lastly, the approximate number of Ab binding sites on beads relative to each Ab binding site on the cell was calculated ( Table II ). The numbers were expected to be similar within one type of bead, as the same clone and a saturating amount of Ab were used for each stain. Interestingly, binding sites for NovaFluor dyes could not be calculated: for an unknown reason, the binding of this Ab to cells was extremely poor but worked normally with beads, which pushed the numbers abnormally high; accordingly, Nova data were not used for the calculations in Table II . The experimental variation was estimated to be roughly within ±10% (acceptable, green), with 10–20% variation (yellow) being partly acceptable. For all bead types, there are some exceptions where the number of Ab binding sites falls outside this range, mainly for fluorochromes such as BV650, FITC, and Alexa Fluor 700. Notably, FITC showed a discrepancy, even though this molecule has been used in flow cytometry since the inception of the technology. Regarding beads, VersaComp, COMPtrol, BD Biosciences, and MACS performed best, with the lowest number of discrepancies. The performance of these beads indicates that differences in fluorochromes mostly do not alter the binding ability of Abs. UltraComp, OneComp, and Slingshot have several numbers outside the ±10% cutoff, and AbC Total beads displayed the highest number of discrepancies. These observations suggest that the discrepancies are likely due to the bead shape/geometry or the material itself, which inhibits Ab binding to the bead. In general, Alexa Fluor 700 has more binding sites for any bead type than any other fluorochrome; perhaps, in addition to true Ab binding, the fluorochrome itself is interacting with the bead.
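The ±10% / 10–20% classification described above can be sketched as a simple threshold check. The values below are hypothetical and not taken from Table II:

```python
# Sketch of the +/-10% (green) and 10-20% (yellow) classification applied to
# per-fluorochrome binding-site estimates for one bead type, relative to a
# normalized expectation. All values are hypothetical, not from Table II.

def classify(value, reference):
    deviation = abs(value - reference) / reference
    if deviation <= 0.10:
        return "green"       # acceptable
    if deviation <= 0.20:
        return "yellow"      # partly acceptable
    return "red"             # outside the expected range

sites = {"FITC": 0.86, "BV650": 1.22, "Alexa Fluor 700": 1.05}
reference = 1.00  # normalized expectation for this bead type
flags = {f: classify(v, reference) for f, v in sites.items()}
```

Fluorochromes flagged yellow or red would then be inspected as candidate binding-site discrepancies of the kind reported for BV650, FITC, and Alexa Fluor 700.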
Additionally, the beads did not perform the same between machines. On the analyzer, all samples ran at a decent, steady events-per-second rate for all beads, but on the sorter the events per second varied. A flow rate of 5 was required for OneComp, UltraComp, and Slingshot beads to reach approximately 100 events per second, whereas a flow rate of 2 was adequate for the other beads. Most likely, the materials used for the beads requiring a higher flow rate are denser, whereas the rest are lighter. This is likely not apparent on the analyzer because its sample line is short and the beads travel shorter distances, whereas the sorter has significantly longer sample tubing, requiring a stronger push to eliminate stalls over the longer sample line.
Finally, this study indicates three things that are important for flow cytometry users to know. First, our observations show that creating single-color stains using cells for correction is the best choice. Second, in many cases where the use of cells is not an option, researchers need to mix and match cells and commercially available beads until a better material option is developed. Third, we have one recommendation for shared resource laboratory managers to mitigate this issue to some extent. Core facility managers should consider running single stains of all possible fluorochromes (CD4 kit or similar sources) using different beads, or the beads of choice for the users of the facilities. This database requires all samples to be run on the same day. If scatter plots provide sufficient separation, multiple beads can be added to the same tube to reduce the sample number; different beads can be identified digitally during analyses. At the same time, single color stains for the cell types identified by the user base should be analyzed. Once a multicolor panel is finalized, the MMI matrix of cells can be generated. Using a script, a shared resource laboratory can identify which fluorochrome is suitable on which bead to obtain an MMI matrix as close as possible to the cell-based matrix. The script will use all possible combinations of fluorochromes and beads from the database to identify the best MMI matrix with the lowest number of red entries.
Author Contributions
D.B. and M.L.R. conceptualized and designed the study, and D.B. and S.K.L. prepared the samples. All authors contributed to data analyses. All authors contributed equally to writing the manuscript.
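The fluorochrome-bead selection script described in the final recommendation above could be sketched as follows. The red counts, source names, and the assumption that each fluorochrome's red count is independent of the sources chosen for the others are all hypothetical; a real script would parse the facility's MMI-matrix database:

```python
# Minimal sketch of the suggested selection script: given, for each
# fluorochrome, the number of unacceptable ("red") MMI entries produced by
# each candidate single-stain source, pick the source with the fewest reds
# (cells win ties). Counts are hypothetical. Choosing per fluorochrome
# independently assumes the red count for one fluorochrome does not depend
# on the sources chosen for the others.

red_counts = {
    "FITC":   {"cells": 0, "UltraComp": 0, "MACS": 0},
    "PE-Cy5": {"UltraComp": 2, "MACS": 1},   # cells unavailable here
    "BV650":  {"cells": 0, "UltraComp": 3, "MACS": 4},
}

def pick_sources(red_counts, prefer="cells"):
    picks = {}
    for fluor, by_source in red_counts.items():
        best = min(by_source, key=lambda s: (by_source[s], s != prefer))
        picks[fluor] = best
    return picks

plan = pick_sources(red_counts)
total_reds = sum(red_counts[f][s] for f, s in plan.items())
```

The resulting plan mixes cell- and bead-based single stains per fluorochrome, which is the kind of combined reference set the Discussion recommends when cells alone are not feasible.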
Abstract
Compensation or unmixing is essential in analyzing multiparameter flow cytometry data. Errors in data correction, either by compensation or unmixing, can completely change the outcome or mislead researchers. Owing to limited cell numbers, researchers often use synthetic beads to generate the required single stains for the necessary calculations. In this study, the capacity of synthetic beads to influence data correction is evaluated. Corrected data for human peripheral blood cells were generated using cell-based compensation from the same cells or bead-based compensation to identify differences between the methods. These data suggest that correction with beads on full-spectrum and conventional cytometers does not always follow basic flow compensation/unmixing expectations and alters the data. Overall, the best approach for bead-based correction for an experiment is to evaluate which beads and fluorochromes are most accurately compensated/unmixed.
Supplementary Material
Acknowledgments
We thank Dr. Sumant Basu, Dr. Timothy Bushnell, Geza Paukovics, and Connie Porretta for providing insights. We also thank Anita Fauth and Frank van Diepen for invaluable suggestions.
Disclosures
The authors have no financial conflicts of interest.
License: CC BY. Citation: Immunohorizons. 2023 Dec 6; 7(12):819-833.
PMC10762989 (PMID: 38166881)
Background
Sex workers, those who exchange sex for money or nonmonetary items [ 1 ], are disproportionately impacted by HIV [ 2 ]. People engaging in sex work practice in a variety of settings, both indoors and outdoors. In the United States, sex workers are often a hidden, difficult-to-access population [ 3 , 4 ] and are thus underrepresented in HIV prevention research [ 5 ]. Due to the criminalized nature of sex work, few individuals self-identify as sex workers, making it challenging to quantify the actual number of sex workers in Chicago. Arrest data are often used as a proxy to quantify sex work, and between 70,000 and 80,000 sex work-related arrests occur annually. However, an estimated one million to two million women in the United States engage in sex work [ 6 , 7 ].
Under the current U.S. legal framework, sex work is illegal in all 50 states, apart from a few licensed brothels in specific Nevada jurisdictions [ 8 ]. In Chicago, IL, sex work is not only criminalized, but prostitution-specific ordinances also grant police officers discretion in how to respond to someone believed to be engaging in sex work. Arrests, tickets, fines, and jail time are all potential outcomes for those Chicago police believe to be engaging in sex work. While Federally Qualified Health Centers exist throughout the city of Chicago, fear of criminalization, stigmatization, and discrimination serves as a barrier to accessing care. Further, the hidden, illegal nature of sex work makes tailored HIV prevention efforts for sex workers challenging to implement. The same structural barriers to care access also contribute to a gap in HIV prevention research among sex workers because, until recently, sex workers have not been partners in research development [ 9 , 10 ]. While it may be argued that sex workers are indeed an over-researched population, especially in HIV research, their needs and priorities have not been adequately addressed [ 2 , 11 – 13 ]. Research is needed to facilitate the development of evidence-based interventions specifically tailored to their needs [ 5 ].
Community-empowered interventions, however, are evidence-based and have been a cornerstone of reducing HIV transmission among sex workers because they are designed, implemented, and evaluated by the community served [ 9 , 14 ]. Centering Healthcare (Centering) is a community-empowered intervention that has been successful with other populations experiencing health inequities. Aimed at addressing the healthcare, learning, and community-building needs of pregnant patients, Centering originated as a group model of prenatal care but has since been successfully adapted for other populations, including those with sickle cell disease and diabetes, and as a method of postpartum HIV prevention [ 15 ]. As such, Centering may be well suited for HIV prevention among sex workers, as it has excellent potential to fit within current health systems, meet the HIV prevention needs of sex workers, and increase the probability of sustainable care [ 9 , 16 – 19 ]. Centering studies report positive outcomes, including increased condom use, fewer repeat pregnancies, lower pre-term birth risk, greater knowledge and satisfaction, and more care visits [ 20 – 22 ].
Why community-empowered interventions?
Sex workers are particularly vulnerable to HIV/STIs due to increased exposure to trauma, structural violence, and social barriers. These complex factors, in addition to high rates of intimate partner violence, may reduce autonomy over health-promoting behaviors such as consistent condom use, thus increasing vulnerability to HIV/STIs [ 1 , 9 , 23 ]. In addition, structural barriers in the healthcare setting have made sex workers less likely to receive comprehensive healthcare, STI screening, treatment, and HIV prevention services [ 23 ]. Historically, those with increased vulnerability to HIV, such as sex workers, have been excluded from the process of intervention development, thereby limiting the effectiveness of interventions. Community empowerment approaches may help to overcome these limitations, as empowerment is a collective process "whereby sex workers are empowered and supported to address the structural constraints to health and improve their access to services to reduce the risk of acquiring HIV" [ 9 , p.172]. Community-empowered interventions aim to decrease vulnerability to STIs, including HIV, and to social and structural barriers while increasing individual, financial, and community resources and social support [ 24 ]. Therefore, community-empowered, innovative approaches to preventing HIV among sex workers are needed [ 9 , 25 ].
Why Centering Healthcare for PrEP care?
PrEP is a medication that protects against HIV infection [ 26 ]. For those currently uninfected but at increased risk, taking a daily pill is an effective HIV prevention method. It is an excellent option for sex workers because it requires no partner negotiation, is user-controlled, and is cost-effective [ 26 , 27 ]. PrEP is less effective when not taken as prescribed; as such, innovative and sustainable ways to foster PrEP initiation and adherence among those with increased vulnerability to HIV remain a public health priority [ 1 , 26 , 27 ]. Adapting Centering to meet the PrEP care needs of sex workers in Chicago aims to bolster the efficiency of healthcare personnel while also enhancing the healthcare experience of the patients served. A preponderance of evidence suggests that Centering improves outcomes for disadvantaged groups, including those from historically under-resourced communities managing sickle cell disease, diabetes, and interstitial cystitis, thereby highlighting its potential relevance for efficient and effective PrEP care for sex workers [ 15 , 28 , 29 ].
Why ADAPT-ITT?
The Assessment, Decision, Adaptation, Production, Topical experts-integration, Training, and Testing (ADAPT-ITT) model has been successfully used to tailor evidence-based interventions to meet the specific needs of communities with increased vulnerability to HIV [ 30 , 31 ]. The model includes eight phases and, in alignment with community empowerment, leans on the guidance and leadership of community members and stakeholders to effectively adapt an existing model to meet a new community's HIV prevention needs. Rather than reinventing the wheel, ADAPT-ITT allows researchers to build on what has previously proven effective, i.e., Centering Healthcare. Utilizing ADAPT-ITT for our study ensured community involvement from conception to dissemination. In this paper, we describe the use of the ADAPT-ITT model for adapting the Centering Healthcare intervention to meet the HIV prevention and PrEP needs of sex workers in Chicago [ 30 , 32 ]. The Centering model has been successfully adapted and used around the world. In alignment with the Getting to Zero 2030 initiative, this community-empowered Centering adaptation, focusing on HIV prevention among sex workers, has excellent potential to engage sex workers in consistent PrEP care by elevating the HIV prevention and community-building needs of this marginalized population.
Methods
We used the ADAPT-ITT framework to guide the adaptation process. Specific steps incorporated within each phase are described within the paper and summarized in Table 1 . Formative research took place in six stages between January 2019 and March 2022. Our team included researchers with expertise in Centering Healthcare; intervention development, adaptation, and implementation; and health equity research, as well as a community advisory board that included current and former sex workers, healthcare providers, social workers, and representatives from an FQHC in Chicago. A diverse group of 13 authors contributed to this work, including six white cisgender women, two Black cisgender women, two Black cisgender men, one Black nonbinary person, one Arab cisgender woman, and one white cisgender man.
Additionally, six contributing authors identify as LGBQ+, and socioeconomic class spanned from working class to middle or upper middle class. Regarding potential biases, we recognized how our own experiences of racism, sexism, ageism, and the intersection of these identities may influence how we understand and interpret participants' experiences [ 33 ]. Therefore, we were careful not to make assumptions or draw conclusions about participants' experiences from prior work or based on our own experiences; however, as a few co-authors shared identities with the participants, we think our lived experiences strengthened the research process. To protect individual authors' privacy, we will not disclose who is a current or former sex worker. However, it is essential to note that current and former sex workers were involved in every stage of the process. They served in leadership roles by running CAB meetings, co-developing the semi-structured interview guide, conducting interviews, and collaboratively analyzing the data and disseminating findings.
Recruitment
Participant recruitment focused primarily on adults engaged in sex work. For this study, sex workers are defined as those who exchange sex for money or nonmonetary items [ 1 ]. Sex worker participants were considered eligible if they were 18 or older, traded sex for money or nonmonetary items within the last 12 months, spoke English, and were willing to participate in audio-recorded individual or focus group interviews addressing their HIV prevention and sexual health self-management practices. Later in the study, care providers were recruited to participate in focus groups. At various stages of the study, participants were either passively recruited through clinic-based flyers, social media (i.e., Twitter, Facebook, Instagram), and private community list-serves or actively recruited via word-of-mouth referrals. Whether a care provider or a sex worker, all potential participants emailed or called the study team to assess eligibility and learn more about the study; if eligible and interested, a remote individual or focus group visit was scheduled. Recruitment was conducted virtually, and interviews were held over Zoom because COVID-19 precautions restricted in-person events. While we could not provide internet access to individual participants, some participants could access broadband through various networks, including public outdoor spaces where internet usage was free and accessible.
Ethical considerations
The institutional review board of the University of Illinois Chicago and the FQHC approved all study procedures. Conducting research with participants who engage in sex work has unique ethical considerations, as this is a population that experiences criminalization and targeted policing. Facilitating confidentiality within individual interviews and focus group sessions was essential to protect those who trade sex and those who have multiple marginalized identities in addition to being a sex worker. The Zoom sessions, individual interviews, and focus group interviews were audio recorded. Participants were assigned a unique code number used only for this study on a digitally recorded file and transcript to protect participant anonymity. In addition, many participants chose not to turn their cameras on. De-identified audio recordings were transcribed by a professional transcription service, with incidental identifiers removed during transcription. Individual interview data were coded, stored on a password-protected computer, and encrypted to prevent access by unauthorized personnel; any identifiers, raw audio recordings, and contact information were destroyed after data collection. Focus group sessions provide opportunities for conversation and shared insight; however, they are less confidential than an individual interview. While focus group participants were encouraged to maintain confidentiality, confidentiality could not be assured post-focus group. To further minimize these risks, the researchers asked all focus group members to respect each other's privacy and confidentiality and not identify anyone in the group or repeat what was said during the group discussion.
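The participant-coding step described above can be sketched in miniature. This is purely illustrative: the class name, the code format, and the `destroy_link` helper are our own inventions for exposition, not part of the study protocol.

```python
# Hypothetical sketch of the de-identification workflow described above:
# each participant receives a unique, study-specific code so that
# recordings and transcripts carry no direct identifiers.
import itertools


class ParticipantCodebook:
    """Maps participants to anonymous study codes. The linking table is
    kept separately (encrypted) and destroyed after data collection."""

    def __init__(self, prefix="CPREP"):
        self._prefix = prefix
        self._counter = itertools.count(1)
        self._codes = {}

    def code_for(self, participant_key):
        # Reuse the same code if the participant is seen again.
        if participant_key not in self._codes:
            self._codes[participant_key] = f"{self._prefix}-{next(self._counter):03d}"
        return self._codes[participant_key]

    def destroy_link(self):
        # After data collection, the identifier-to-code link is erased.
        self._codes.clear()
```

The key property is that only the code travels with recordings and transcripts, while the identifier-to-code link is stored apart and erased once data collection ends.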
Phase 1 (Assessment)
We conducted a community health assessment and held regular discussions between the research team and community partners, through which we identified the need to develop an HIV prevention and PrEP navigation program.
Procedures
We held quarterly two-hour advisory board meetings with the FQHC leadership and FQHC-affiliated healthcare providers. We formed a 10-person Community Advisory Board (CAB) that was co-facilitated by a former sex worker and a respected community member and composed of current and former sex workers, outreach workers, caseworkers, healthcare providers, researchers, and Centering experts. In addition, we conducted elicitation interviews ( n = 6) with Centering Healthcare experts to gain insight into this model of care [ 34 ]. To assess whether care providers, staff, and building space would be adequate for the Centering project and would meet the cultural safety needs of the community, we conducted formative evaluations of the FQHC's existing resources. We also conducted one-on-one interviews and collected demographic surveys to assess the knowledge, attitudes, beliefs, perceived risk, barriers and social norms, self-efficacy, and intentions related to HIV prevention, HIV self-management, and HIV harm reduction with sex workers ( n = 36) in Chicago [ 14 , 35 , 36 ].
The inclusion criteria for the individual interviews with sex workers included the following: a) age 18 or older; b) exchanged oral, vaginal, or anal sex for something of value in the past 12 months; c) live in the Chicago area; d) speak and understand English; and e) be willing and able to provide informed consent to participate in an individual interview. Rapid content analysis was used to identify themes from all qualitative interviews, and PrEP emerged as a significant theme. All the sex workers interviewed were asked about PrEP, and many described both interest in and barriers to taking PrEP to prevent HIV. One participant stated, "Hey, if I'm participating in sex work and I have not contracted HIV, then I can take PrEP, and it can help protect me against it... when you take that information and pass it around... it's helping someone" (36-year-old, Black, cisgender male). When asked about interest in PrEP, another participant acknowledged her challenge around accessing PrEP, stating, "I feel like it (PrEP) was inaccessible... but yeah, I think, ideally, I might use it" (47-year-old Latinx cisgender woman). In line with individual interview findings, focus groups and CAB members acknowledged the need for PrEP care to be community-empowered and accessible. Other healthcare-related themes (see Table 2 ) included seeking unbiased patient-centered care, peer-involved care, and community-building opportunities [ 14 , 36 ]. While initial CAB meetings were held in person, data collected after March 2020, including the individual interviews, occurred over Zoom due to the COVID-19 pandemic restrictions, limiting participation to those with internet access.
Phase 2 (Decision)
The CAB determined that Centering would be adapted from its original purpose as a group prenatal care model to meet the HIV prevention and PrEP care needs of sex workers in Chicago. The CAB's decision was based on the evaluation of the information gathered in phase 1 (Assessment) and previous evidence of the effectiveness of the Centering model [ 20 , 21 , 29 ].
The rationale for the choice of Centering
Centering is an evidence-based community-empowered model with demonstrated effectiveness for reducing health inequities in various patient populations and disease types [ 20 , 21 , 29 ]. It has since been modified and adapted to address the needs of diverse patient populations. Rather than a one-on-one visit, a cohort (or group) of 8–12 patients meets with the same providers at each visit for regular health assessments, linkages to services, and 75–90 minutes of interactive learning and skill-building that centers on patients' experiences. The Centering Healthcare model emphasizes social support and joint problem-solving through Health Assessment, Interactive Learning, and Community Building [ 21 ].
Centering has been shown to improve individual health through group engagement. The group process enhances learning, promotes healthy behavior change, builds a sense of control over health while developing a supportive network, and creates a collaborative provider–client relationship through continuity of care (Baldwin, 2006; Klima et al., 2009). Another strength is Centering's clear guidelines, which allow for program replication. Participants had a favorable response to the Centering model. For example, when asked about adapting Centering to meet the PrEP care needs of sex workers, one participant acknowledged the benefit of peer support by saying, "Yeah, I think that'll be super helpful in engaging, to hear about tips that other people have... because it's really useful for people to kind of share and come together. If we're thinking about Centering, this is not an opportunity that maybe they have, especially if they don't know a lot of people who are exchanging sex or in sex work. It could be nice to have this outlet and just kind of engagement" (40-year-old, Black, transgender woman).
Phase 3 (Adaptation)
University researchers and community stakeholders collaborated with the FQHC to conceptualize how to adapt Centering for sex workers initiating PrEP.
Procedures
We conducted virtual sessions with the existing CAB, during which we reported the findings from the individual interviews conducted during the assessment phase. We discussed barriers and facilitators to accessing healthcare during and before the pandemic. We also presented a general overview of Centering and various activities implemented in different sessions. We then discussed how Centering could be adapted to address the PrEP care and health promotion needs of sex workers in Chicago. CAB members agreed to retain the three core elements of the Centering model (Health Assessment, Interactive Learning, and Community Building) but suggested adapting certain activities to be responsive to sex worker culture regarding HIV prevention. For example, CAB members recommended revising language about body parts or creating opportunities to use humor for icebreakers. For more examples, refer to Tables 2 and 3 .
Phase 4 (Production)
This phase included adapting activities and materials needed for the targeted intervention in the three sessions.
Procedures
Our team of Centering experts, interventionists, and community liaisons incorporated themes from individual interviews and CAB feedback to develop the first draft of three two-hour sessions of an HIV prevention model of Centering for sex workers, which we named Centering PrEP (C-PrEP +). All three sessions of the C-PrEP + model include health assessments, interactive learning, and community building. Each session focuses on a theme and has corresponding activities. The theme for session one is an orientation to Centering & mindfulness. Discussion topics include C-PrEP + , PrEP-related knowledge, and health management practices. For example, a session one activity is called "HIV & PrEP: Word on the Street." The goal of this activity is to dispel myths about HIV and PrEP. Session two focuses on two topics: COVID-19 and coping mechanisms. Discussion topics include barriers to adequate health care, side effects of PrEP, substance use, individual and community impact of COVID-19, and identification of effective coping strategies. The theme for session three is harm reduction behaviors. This session aims to identify sexual behaviors that compromise safety while working and to build behaviors that decrease HIV risk.
Some topics and activities will facilitate effective communication, negotiation, and safe sex practices. The corresponding activity for safe sex practices in the C-PrEP + model is called "Mental Checklist for Safety." Facilitators will guide participants to create a short, memorable checklist they can refer to when engaging in sex work. For more details about C-PrEP + content, see Appendix 1 and 2 . Through an iterative process facilitated by a former sex worker, we developed the first draft of C-PrEP + and the facilitator's guide. We adapted and created all the sessions' activities while maintaining Centering's core elements. We also finalized the selection of the collaborating FQHC as the site for formative pilot testing.
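The three-session plan described above can be restated compactly as structured data. This is an illustrative sketch only; the authoritative content is the facilitator's guide in Appendices 1 and 2.

```python
# Illustrative, structured restatement of the three C-PrEP+ sessions
# summarized in the text above (not part of the study materials).
C_PREP_SESSIONS = {
    1: {
        "theme": "Orientation to Centering & mindfulness",
        "topics": ["C-PrEP+", "PrEP-related knowledge",
                   "health management practices"],
        "example_activity": "HIV & PrEP: Word on the Street",
    },
    2: {
        "theme": "COVID-19 and coping mechanisms",
        "topics": ["barriers to adequate health care", "side effects of PrEP",
                   "substance use", "impact of COVID-19", "coping strategies"],
        "example_activity": None,
    },
    3: {
        "theme": "Harm reduction behaviors",
        "topics": ["communication", "negotiation", "safe sex practices"],
        "example_activity": "Mental Checklist for Safety",
    },
}
```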
Phase 5 (Topical Experts)
The topical expert phase leans heavily on those with proficiency in the content being adapted (HIV prevention and PrEP care), know-how related to model adaptation (Centering), and knowledge about the population to be served by the model (sex workers). Topical experts are distinct from the CAB members and are asked to give feedback on the adaptation process and the materials used. Topical experts included two former sex workers, one Centering Healthcare interventionist, one social worker, one legal expert, and two sexual health FQHC clinicians. During this phase, we presented the first draft of the C-PrEP + model to these experts. We then collected feedback to further adapt this community-empowered model. After modifying the model according to expert feedback, we conducted focus groups with sex workers and care providers together to elicit targeted feedback.
Procedures
We developed quality assurance and process measures by identifying topical experts in HIV prevention, sex work, and Centering Healthcare. Distinct from study participants and community advisory board members, topical experts included sex workers as well as stakeholders from HIV-focused community organizations, public health experts, and Centering experts. We collected feedback on the content, materials, and acceptability of C-PrEP + . For example, topical experts suggested incorporating meditation/reflective components into each session and providing opportunities for anonymous questions during the activities (see Table 3 ). The corresponding facilitator's guide and C-PrEP + model activities were adjusted based on the topical experts' feedback. Additional feedback elicited through semi-structured focus group meetings with sex workers and care providers further informed the C-PrEP + adaptation.
Focus group sessions with sex workers and care providers
Traditional ADAPT-ITT theater testing allows study participants (in this case, sex workers and care providers) to experience and respond to the proposed intervention before pilot testing. Due to COVID-19 restrictions, we were unable to conduct traditional theater testing. Instead, we conducted four focus group sessions in which sex workers and care providers together were introduced to Centering content and corresponding activities to elicit input on the adapted activities (see Tables 3 and 4 ). Sex workers provided feedback on activities and offered suggestions for targeted adaptation, while care providers offered recommendations related to PrEP care protocol and patient engagement. At the end of this phase, we developed the second draft of the model and the corresponding facilitator's guide.
Participants
The inclusion criteria for sex workers to participate in the focus group sessions included: a) age 18 or older; b) exchanged oral, vaginal, or anal sex for money or nonmonetary items in the past 12 months; c) live in the Chicago area; d) speak and understand English; and e) consent to actively participate in an audio-recorded, two-hour focus group session. The inclusion criteria for providers to participate in the focus group sessions included: a) be an R.N., APRN, or physician who cares for those engaged in sex work; b) speak and understand English; and c) consent to actively participate in an audio-recorded, two-hour focus group session. Qualitative results from the focus groups highlight how input from participants contributed to model adaptation. For example, a 32-year-old Black female sex worker acknowledged how collaborative transformation made the model more relevant: "I just want to say that I really appreciate the iterative process of the work that you're doing. The going to the community, getting information, coming together, creating something, bringing it back to community, gain feedback, sort of over and over. I really appreciate that process.... Y'all hitting what's trending now." Tables 3 and 4 expand on how focus group sessions informed the model adaptation.
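The two eligibility screens above are simple conjunctions of criteria. As an illustration only — in the study, screening was done by phone or email, and the function names below are ours, not the study team's — the logic amounts to:

```python
# Hypothetical sketch of the focus-group eligibility criteria listed above.
ELIGIBLE_PROVIDER_ROLES = {"RN", "APRN", "physician"}


def sex_worker_eligible(age, traded_sex_past_12mo, lives_in_chicago,
                        speaks_english, consents_to_recording):
    """Inclusion criteria a)-e) for sex worker focus-group participants."""
    return (age >= 18 and traded_sex_past_12mo and lives_in_chicago
            and speaks_english and consents_to_recording)


def provider_eligible(role, cares_for_sex_workers, speaks_english,
                      consents_to_recording):
    """Inclusion criteria a)-c) for care-provider focus-group participants."""
    return (role in ELIGIBLE_PROVIDER_ROLES and cares_for_sex_workers
            and speaks_english and consents_to_recording)
```

Every criterion must hold; failing any single one makes a candidate ineligible.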
Phase 6 (Integration)
Feedback from topical experts and findings from the sex worker and care provider focus groups informed the third draft of the model adaptation. We reviewed and adapted the second draft of C-PrEP + collaboratively with CAB members based on the analysis of the focus group sessions and integrated the findings to develop the third draft of the model and associated facilitator's guide. The following section describes the final adaptations made to the original activities.
Adaptations made to original activities
As shown in Table 4 , focus group and CAB feedback impacted how C-PrEP + was adapted. Role-plays that focused on condom negotiation were included in the initial two drafts of the C-PrEP + facilitator's guide and associated materials. Role-playing the experience of negotiating with sex work clients was an adaptation from the original Centering model, which uses role-plays to facilitate interactive learning and skill-building. Role-plays around sexual coercion and associated negotiation were deemed culturally unsafe by topical experts and were ultimately removed. Instead, CAB members and topical experts agreed that providing opportunities to discuss circumstances that impact safety would be necessary for the health promotion and disease prevention of sex workers. Therefore, CAB and topical experts determined an activity called "Mental Checklist for Personal Safety" to be a culturally safe alternative to role-plays. This activity utilizes four different tables (each with a different theme, such as Communication and Negotiation, Physical Items and Mental Preparation, Safety, and Other-Participants' Choice). It aims to develop a personal checklist of four items/ideas to promote health and reduce harm when partaking in sexual activities. Here, participants would be separated into three groups. Each group would rotate to different tables where they could add their ideas to a sheet of paper, creating a brainstormed list. Participants will return to the circle once each group has rotated to each table. Each participant will be given a notepad and pen to write out their mental checklist during the discussion. Facilitators will start the conversation by acknowledging the items listed at each table (for example, possible items and ideas listed include communicating about condom usage, taking PrEP medication if applicable, having PPE and travel sanitizer on hand, and being prepared by learning self-defense or carrying pepper spray).
Another adaptation that C-PrEP + takes from the original Centering model is the style for starting each session. The Centering model opens sessions in community, with the group seated in a circle, sometimes with three deep breaths and a chime. The CAB and topical experts agreed that deep breathing and a calming sound are practical tools, but there should also be a verbal opening about privacy and trust. Therefore, to align with culturally safe and community-empowered practices, participants will collaboratively create a mission statement about group privacy, trust, and confidentiality, which will be reaffirmed at the beginning of each session.
Based on study results and CAB input, new activities were added during the development of C-PrEP + that maintain the model's fidelity while remaining responsive to the stated needs of the community (Appendix 2 ). The first of these themes surrounded the discussion of gender identity and body part naming. This diverse population utilizes their bodies in the work they do and includes a significant number of transgender, gender fluid, gender non-conforming, and gender expansive individuals. To avoid triggering and stigmatizing language when discussing body parts, topical experts and the CAB agreed that it would be essential to have an activity related explicitly to body parts and genitalia naming. The aim will be to create a respectful environment for discussions about the physical body to ensure that the language is appropriate and respectful. The CAB members developed a genital name game that sex worker topical experts agreed was a fun and light way to address the need for respectful language. During this activity, participants will be asked to list (on a whiteboard) the names used to describe private parts. After as many names as possible are written on the board, the group will be asked to discuss names that could be triggering to the participants. While CAB members and self-identified sex workers believed that categorizing private part names as "thumbs up" and "thumbs down" was fun, straight to the point, and engaging, healthcare provider topical experts were concerned that the thumbs up or thumbs down could be felt as stigmatizing or ostracizing. Responsively, both the CAB and all topical experts agreed to remove the thumbs-up or thumbs-down component and determined that a structured discussion about agreed-upon private part names would be an essential activity to promote cultural safety within a group setting.
To close the sessions, the researchers, CAB, and topical experts aimed to be responsive to participants' desire for spiritual and meditative components as part of their care. For this reason, we have added a grounding stone for personal or shared intentions. This activity is intended to provide solidarity at the end of every session. The group will come together, hold a stone supplied by the research team, and say or think a positive affirmation or meditate with a positive intention for someone else to pick up and receive. These stones will touch and blend. Each participant will be invited to carry a stone throughout the week to symbolize positivity, empowerment, and solidarity. Participants will have three stones at the end of the three C-PrEP + sessions. | Discussion
This study filled a critical gap in Centering adaptation and in community-empowered research with sex workers considering PrEP. While Centering has been successfully adapted, the model has never been iteratively adapted with community members and stakeholders, nor has it been adapted to meet the PrEP care needs of sex workers. The ADAPT-ITT framework offered structure in adapting the Centering model to suit the stated requirements of sex workers and care providers. Key insights, including spirituality and meditation elements, resulted in specific adaptations, such as using a grounding stone to emphasize intentionality. To address HIV disparities among sex workers, we directly addressed structural barriers to care, like stigma and healthcare discrimination, by incorporating a community-empowered intervention, C-PrEP + , that engaged sex workers throughout the intervention development process [ 25 , 37 ]. We developed a C-PrEP + facilitator's guide outlining facilitator expectations and detailing the goals and objectives for each of the three C-PrEP + sessions. This program has great potential for improving PrEP adherence; when taken as prescribed, PrEP has been shown to reduce the risk of sexually acquired HIV by about 99% [ 27 ]. To empower marginalized sex workers to have autonomy over their health in complex and often violent work environments, C-PrEP + has excellent potential to fit within current health systems, meet the HIV prevention needs of sex workers, and increase the probability of sustainable care [ 9 , 16 – 19 ]. Such an intervention integrates support that will, directly and indirectly, impact HIV/STI prevention, allowing the potential for sex workers to be empowered, see reductions in HIV/STI transmission, and improve other aspects of their health.
Limitations
The ADAPT-ITT framework has proven valuable, and many studies have successfully employed it; however, due to COVID-related challenges, the model was only partially applied in the current study. Future studies should include all phases of the model; we have received funding to complete the training and testing phases. As this research was conducted during the COVID-19 pandemic, all study procedures were conducted over Zoom, limiting participation to those with internet access. This need for internet access may have resulted in a more stable, less structurally vulnerable population, which may not represent the broader Chicago sex worker community. Due to COVID-19 restrictions on in-person research, traditional theater testing was not implemented. Instead of theater testing, we presented the model by describing each session and each activity over Zoom through regular stakeholder meetings. These meetings allowed us to receive feedback about the model adaptation. Although the sample size of this study is small, the findings reflect diverse experiences among sex workers during the pandemic, and the input received from various stakeholder groups is a strength of the study. Future work should consider larger sample sizes and in-person theater testing to replicate the findings of this study.
Another limitation is that we have yet to pilot this adapted model. The next steps involve the final two phases of the study: training and testing. With funding from the National Institute of Nursing Research, we plan to pilot this culturally adapted Centering model (C-PrEP +) at one FQHC to determine the feasibility and acceptability of addressing the HIV prevention and PrEP care needs of sex workers in Chicago. This piloting includes step-by-step training for C-PrEP + facilitators and lunch-and-learn activities for those who can recommend this group care model. Using an implementation science framework, we aim to continue our partnership with community members, Centering Healthcare experts, researchers, and FQHC stakeholders to integrate C-PrEP + into the healthcare system. Despite the effectiveness of Centering [ 20 , 21 ], no prior studies have evaluated whether it is a feasible and acceptable model among sex workers for HIV prevention and PrEP care.
The primary goal of this study was to utilize the ADAPT-ITT model phases 1–6 to adapt Centering for sex workers desiring Pre-Exposure Prophylaxis (PrEP) for HIV prevention. Centering PrEP (C-PrEP +), a group model of PrEP care, was iteratively developed to provide an alternative to usual PrEP care for sex workers to increase community and individual empowerment and facilitate PrEP adherence through community support and skill-building. The iterative adaptive process to create the C-PrEP + model highlights the importance of implementing community empowerment approaches to improve health outcomes for sex workers. In addition to sex worker team leadership, scholars from numerous research institutions collaborated with a federally qualified health center (FQHC), community organizations, a community advisory board, and other stakeholders to ensure that the adaptation process was appropriately addressing the PrEP care needs of sex workers in Chicago. A tailored HIV prevention intervention, C-PrEP + aims to reduce HIV disparities among sex workers by focusing on community-empowered health promotion and PrEP adherence.
The adaptation process was overwhelmingly well-received by community members. CAB meetings often ended with members offering unsolicited gratitude for the consistent and participatory engagement. The ADAPT-ITT provides a framework for ensuring community engagement throughout the adaptation process [ 30 , 31 ]. We also know that community-empowered interventions have been the most successful at preventing HIV among sex workers [ 24 ]. Though Centering has been adapted to meet the needs of other populations [ 15 , 28 , 29 ], this model has never been adapted utilizing the ADAPT-ITT framework, nor has it been adapted to suit the HIV prevention and PrEP care needs of sex workers. Centering PrEP addresses a gap in HIV prevention care for sex workers by harnessing the power of the community and by developing a model that can be piloted and then replicated regionally, nationally, and globally.
This research highlights a critical approach to intervention development among one highly marginalized population. Working in partnership throughout the research process, from conception through dissemination, elevates community members' voices. The community members who participated in the model adaptation are excited about the launch of this program. We are hopeful that piloting will be successful, given those who informed the model are the same people researchers aim to serve. | Background
Sex workers, those who trade sex for monetary or nonmonetary items, experience high rates of HIV transmission but have not been adequately included in HIV prevention and Pre-Exposure Prophylaxis (PrEP) adherence program development research. Community-empowered (C.E.) approaches have been the most successful at reducing HIV transmission among sex workers. Centering Healthcare (Centering) is a C.E. model proven to improve health outcomes and reduce health disparities in other populations, such as pregnant women and people with diabetes or sickle cell disease. However, no research exists to determine whether Centering can be adapted to meet the unique HIV prevention needs of sex workers.
Objective
We aim to explain the process by which we collaboratively and iteratively adapted Centering to meet the HIV prevention and PrEP retention needs of sex workers.
Methods
We utilized the Assessment, Decision, Adaptation, Production, Topical Experts, Integration, Training, Testing (ADAPT-ITT) framework, a model for adapting evidence-based interventions. We applied phases one through six of the ADAPT-ITT framework (Assessment, Decision, Adaptation, Production, Topical Experts, Integration) to the study design to address the distinct HIV prevention needs of sex workers in Chicago. Study outcomes corresponded to each phase of the ADAPT-ITT framework. Data used for adaptation emerged from collaborative stakeholder meetings, individual interviews ( n = 36) and focus groups ( n = 8) with current and former sex workers, and individual interviews with care providers ( n = 8). In collaboration with our community advisory board, we used a collaborative and iterative analytical process to co-produce a culturally adapted three-session facilitator's guide for the Centering Pre-exposure Prophylaxis (C-PrEP +) group healthcare model.
Results
The ADAPT-ITT framework offered structure and facilitated this community-empowered innovative adaptation of Centering Healthcare. This process culminated with a facilitator's guide and associated materials ready for pilot testing.
Conclusions
In direct alignment with community empowerment, we followed the ADAPT-ITT framework, phases 1–6, to iteratively adapt Centering Healthcare to suit the stated HIV Prevention and PrEP care needs of sex workers in Chicago. The study represents the first time Centering has been adapted to suit the HIV prevention and PrEP care needs of sex workers. Addressing a gap in HIV prevention care for sex workers, Centering PrEP harnesses the power of community as it is an iteratively adapted model that can be piloted and replicated regionally, nationally, and internationally.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12889-023-17508-4.
Keywords | Supplementary Information
| Acknowledgements
Authors would like to acknowledge our community partners, study participants, Southside Health Advocacy Resource Partnership, the community advisory board, and Howard Brown Health. For Baby T.
Authors' contributions
Authors RBS, JB, AKJ, JZ, NC, SA, DB, SS, CP, AKM wrote the manuscript text. Authors RBS, JB, NC, NG, JS, JN were involved in data collection and collective adaptation. Authors RBS, JB, JZ, NC, SA wrote the tables. All authors edited, revised and reviewed the manuscript.
Funding
Rita and Alex Hillman Foundation. NINR 5K23NR020445-02
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Declarations
Ethics approval and consent to participate
All methods were carried out in accordance with relevant guidelines and regulations. Human research approval was granted by the University of Illinois Chicago (UIC) Office for the Protection of Research Subjects institutional review board, reference number: 2019–1443.
All participants provided informed consent.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.

License: CC BY. Citation: BMC Public Health. 2024 Jan 2; 24:56.
PMC10765273 (PMID: 38179382)

Introduction and background
Imbalance of supply and demand
Many surgical procedures have been associated with increasing blood product transfusions in recent years. This problem emerged in part because of the rising incidence of comorbidities such as cirrhosis, wider use of drug therapies for arterial vascular disease, and increasing use of herbal products with anticoagulant effects. Spine surgery, for example, even when not indicated for chronic degenerative disease, is performed regularly in the presence of major trauma, including motor vehicle accidents, falls, and gunshot or knife violence [ 1 ]. The proportion of such emergencies has increased since the COVID-19 pandemic. Many of these patients have associated comorbidities and regularly take medications and herbal products that can contribute to a higher risk of intraoperative bleeding. Furthermore, before the pandemic, nonelective spine surgeries in patients experiencing substance use disorder resulted in a length of stay of 22 days, with 20% of patients leaving against medical advice [ 2 ]. Fusions comprised 71% of these cases, while the next most common indications were infection (53%) or trauma (27%). Of the total, 30% had complications, the most common being hardware failure (14%). After the pandemic began, the incidence of substance abuse increased, and the chance of spine surgery becoming emergent doubled to 38% [ 3 ].
Regional differences in the availability of blood donors
Availability of platelet donations has drastically decreased, partly because COVID-19 induced financial constraints upon individual donors, blood donor centers, and hospitals paying to process the blood. In the USA, each apheresis platelet unit transfused resulted in a charge to insurance of approximately $1000 in 2018, i.e., twice the acquisition cost, per Kaiser Foundation News or any general internet search. Generally, platelets can be donated every week, limited by volunteer time or money. Our university hospital’s daily platelet demand of about a dozen units originates from a single nonprofit organization covering a large region, which collects about 60 units per day. In contrast, a nearby wealthier state experiences no shortage; there, the American Red Cross (ARC) reports that its core small group of donors (active on social media and reached by phone calls from the ARC Donor Center) donates 900 units per day. In some centers in that wealthier state, and also in ours in New Orleans, $50 to $100 is offered to each platelet donor after their four-hour visit to the blood donor center. Most volunteers in America are not paid for donating platelets and are instead motivated to return often by a sense of community and altruism [ 4 ]. Before 2003, platelets from every six whole blood donors were pooled into a "six-pack," equivalent to the only currently widely available option for adults, an "apheresis platelet unit." This transformational policy change did not occur in Europe, accounting for Europe's relatively plentiful supply of platelet donations. The equipment required to collect and administer one unit of apheresis platelets is very expensive, and collection can only be performed in a large blood donor center, not in a mobile van. Most importantly, on top of the cost of platelets, the cost to an individual suffering a hematoma can be life-long disability from neurologic damage (Table 1 ).
Preoperative risk stratification
The typical preoperative patient is increasingly likely to have a diagnosis of obesity defined by a body mass index over 30, with a prevalence approaching 45% in America. Obesity can cause liver cirrhosis, longer surgeries, and infections. Peripheral vascular disease also often coexists with obesity. The effects of commonly prescribed platelet inhibitors such as clopidogrel vary depending on patient genetics. During the psychological depression many experienced in the pandemic, comfort was sought in vitamin supplements, smoking, recreational drugs, over-eating, alcohol, or other forms of substance use disorder, all of which contribute to increased perioperative bleeding (Table 2 ).
An individual hospital should develop a triage tool, posting an algorithm on its walls specific to its needs. One physician facilitates buy-in from major clinical partners and then disseminates the algorithm to end-users. Initially, our department led a quality improvement project at our institution with the primary aim of achieving a sustained increase in platelet donations in the community. Social media postings featuring people donating platelets once a week, word of mouth at the university, and posters in lounges within the hospital were considered. Later we realized the local blood donor center nonprofit has proprietary techniques and staff to drive volunteerism. Thus, we changed to a new aim. We sought to decrease the number of platelets transfused in our rural level-one trauma center. One way to prevent the need for platelet transfusion is to cancel or delay major surgery until the patient is optimized (Table 3 ). Another is to treat each identified disease while measuring the effects of medications that inhibit platelets (Table 1 ).
Last year, a review of over 138,000 papers was undertaken to formulate European practice guidelines. Two to eight weeks were required to correct anemia, including discriminating among the broad differential diagnoses. These researchers described a preference for intravenous iron, when deficiency is found, over oral iron; either is superior to last-minute simple transfusion of packed red blood cells (PRBC), which only masks the problem (Table 4 ) [ 5 ].
Consensus statements, including European guidelines, typically contain waiting times and antidotes readily available for certain procedures [ 5 - 7 ]. We describe locally developed guidelines when facing surgery with a high estimated blood loss.
Measurement of platelet function
Traditionally, a platelet count below 100 x 10^9/L is considered insufficient for most invasive procedures [ 8 ]. The symptoms and signs from a history and physical provide clues to a bleeding diathesis. For example, a history of consumption of large amounts of alcohol, morbid obesity, or the presence of hepatitis C may indicate cirrhosis. Consumption within the last two weeks of ginger, ginkgo, or garlic supplements can precipitate bleeding from platelet dysfunction [ 9 ]. Subtle episodes of bleeding at home, or changes in PT, international normalized ratio (INR), PTT, fibrinogen, mean corpuscular volume, or hemoglobin preoperatively, may trigger consultation with a hematologist. Platelet function is assessed as part of the thromboelastogram (TEG). Alternatively, when platelet dysfunction is associated with a medication, consultation with a pharmacist and drawing a platelet function assay (PFA) can help differentiate the effect of the drug, rather than basing the delay of surgery only upon the last date of administration. For example, after ticagrelor (Brilinta), cardiac surgeons often delay surgery based on the PFA rather than the last administration date. Reconciliation of the last dose of a particular drug is especially critical post-pandemic because fewer than half of medications are taken as prescribed, and patients have fewer resources to obtain costly medications or may delay doctor visits.
A TEG is a set of laboratory values that usually requires an hour and approximately $500. If a patient is heparinized, the special cartridge to neutralize heparin is often not available. Interpretation is complex; hence, TEG is rarely used consistently during the intraoperative management of patients at our institution. Similarly, a platelet count is a laboratory value introducing about an hour's delay, although it is indicated to be redrawn every hour until stable. In massive traumas, attention is often elsewhere, and no platelet count may be measured for about six hours, partly due to the transfer of care among different departments and physicians. Sometimes a TEG and the oft-forgotten fibrinogen, or a simple PT, are not available until long after various massive transfusions. In those cases, strict adherence to the 1:1:1:1 ratio imitating fresh whole blood is imperative, but often forgotten.
Anesthesia departments administer more than half of the nation’s blood supply. Post-pandemic, there has been a shortage of platelets [ 10 ]. Hence, in this quality improvement process pathway, we specifically assessed the utility of TEG measurements compared to more frequently drawn platelet counts, fibrinogen, hemoglobin, calcium, and PT, PTT.
Intraoperatively, the TEG results may require between 5 minutes and 55 minutes to return. The interpretation is complicated, but the results are of immense value in helping differentiate which of the various components to transfuse, as proven in liver transplant or spine surgery. Haemonetics provides a “TEG University” website to train clinicians, and a company representative to contact in case of confusion [ 11 ].
To best guide the management of platelet distribution to patients, it is key to describe each component that causes efficient blood clotting. The most modern method used to assess the clotting process is a TEG. This test provides specific values that grant insight into each component from blood donors or blood substitutes in the form of drugs that need to be administered to improve coagulation. The TEG result is displayed as a time versus amplitude figure. First, the R (reaction) value exhibits the time taken for the first measurable clot to form. Normal R values range from 5 minutes to 10 minutes. R times longer than 10 minutes require treatment, such as fresh frozen plasma (FFP) 250mL or K-Centra. The next reading produced by a TEG is the K (kinetics) value, or early alpha angle reduction, which measures the time taken to reach clot strength. Haemonetics brand TEG machines report this value as the first component of maximal amplitude (MA). These K values factor in the activity of fibrinogen, with a normal range of one to three minutes, or MA normal range. A value exceeding this normal range signifies the need for fibrinogen supplementation, such as cryoprecipitate 5ml/kg or RiaSTAP. The final additional key value produced by a TEG machine is the late MA measurement, primarily reading the level of platelet activity. The normal MA range is 55mm to 73mm; patients with decreased MA could be appropriate candidates for platelet transfusion or drugs that improve platelet function.
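The threshold logic described above can be sketched as a simple decision helper. This is an illustrative sketch only, encoding the normal ranges quoted in this review (R 5-10 minutes, K 1-3 minutes, MA 55-73 mm); the function name and suggestion strings are hypothetical labels, and this is not a clinical decision tool.

```python
# Illustrative sketch of the TEG threshold logic described above.
# Normal ranges are those quoted in this review; NOT a clinical tool.

def interpret_teg(r_min: float, k_min: float, ma_mm: float) -> list:
    """Map TEG readings to the component suggestions named in the text."""
    suggestions = []
    if r_min > 10:   # prolonged reaction time -> clotting factor deficit
        suggestions.append("consider FFP 250 mL or K-Centra")
    if k_min > 3:    # slow clot kinetics -> fibrinogen deficit
        suggestions.append("consider cryoprecipitate 5 mL/kg or RiaSTAP")
    if ma_mm < 55:   # weak maximal amplitude -> low platelet activity
        suggestions.append("consider platelets or platelet-enhancing drugs")
    return suggestions or ["values within the quoted normal ranges"]
```

For example, a reading of R = 12 minutes with normal K and MA would flag only the clotting-factor suggestion, mirroring the sequence in which the text introduces each TEG parameter.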
Many complex spine surgery centers continue to forgo TEG because of its prohibitive expense. One nonpharmaceutical method to improve platelet function is maintaining core temperature (not skin temperature, which is inaccurate during surgery) above 36 degrees Celsius. Core temperatures should be measured in the nasopharynx, esophagus, bladder, or rectum.
Keeping the bed in 10 degrees reverse Trendelenburg may help prevent postoperative posterior ischemic optic neuropathy, a cause of blindness after some spine surgeries, especially those with more than two liters of blood loss. Unfortunately, reverse Trendelenburg positioning may increase gravity-dependent bleeding at the lumbar spine surgical site. Ionized calcium and pH should be frequently measured and maintained above 1.1 and 7.2, respectively, to prevent bleeding. Glucose should be maintained between 80 and 150 via a continuous intravenous regular insulin drip to prevent infection, which can increase blood loss.
Blood components
The current medical literature on K-Centra implies that platelets should be corrected before K-Centra is administered, and the price of each is equivalent. However, the only feasible solution in the presence of platelet shortages is to wait to administer precious platelets until all other factors are corrected. Recommendations during massive transfusion protocols stemming from major trauma were summarized in consensus format in 2014 by the American College of Surgeons Trauma Quality Improvement Project. Their experts advocate frequent redrawing of hematocrit, platelet count, PT, PTT, and fibrinogen. These laboratory values must be redrawn about every 20 minutes in a typical non-bypass liver transplant, or every hour until stable in major gunshot wound-type trauma. An alternative to these is a TEG, similarly drawn into a blue top test tube, with results available in 5 to 60 minutes, similar to the time before PT, PTT, and fibrinogen are available, and with similar clinical effectiveness, depending upon the model of the machine. The hematocrit and platelet counts must be drawn into small purple top test tubes, with results often available in 15 minutes.
Preoperatively, the type and cross specimen needs to be drawn into a very tall pink top test tube at least an hour before surgery. A check specimen must be drawn into a small purple top test tube at least 15 minutes later to avoid clerical error, the most common fatal type of transfusion reaction, occurring in one in 10,000 transfusions. Many hospitals require this check specimen to be repeated if more than three days have elapsed, in case the patient has developed new antibodies. A type and screen is chosen instead if the likelihood of transfusion is low; it provides only the type (A, B, O, and Rh + or -) and a screen for very rare antibodies, and if such antibodies are found, weeks may be required to locate a suitable donor. All of these test tubes must be signed, timed, and dated by the staff drawing them.
The blood bank will not release a room temperature, agitated apheresis platelet unit to the operating theatre until the telephone call requesting immediate transfusion. Some hospitals spend $1,300 to maintain an agitator table in one operating room, which prevents clumping for the full shelf life of platelets (five days) rather than within an hour, while still allowing the platelets to be returned to the blood bank at any time. The other components may be stored for eight hours in a cooler and must be returned for new ice packs and a new cooler beyond that marked period. Once the other components are removed from the cooler, they must be transfused to the patient or disposed of within four hours.
A Pall filter is an orange square with a 40-micron filter and is good for any blood transfusion. If a Pall filter is not available, the typical Y-piece squeeze cylinder contains a 180-micron filter, adequate for any blood component. Warming devices heat the blood to 41 degrees Celsius. Pediatric patients often need small volumes, so the filter is within a small Y-piece called a “Blood administration filter” and tubing connected to a stopcock and a 20mL syringe. If hyperkalemia is a concern, washing packed red blood cells will reduce the potassium but the unit must be transfused within 24 hours, similar to the 24-hour limit after FFP is thawed.
Only a few tertiary care centers have access to alternatives to apheresis platelets. Cold-stored platelets have been implemented in some centers, such as the Mayo Clinic. They are used with success for up to two weeks [ 12 ]. An alternative is low-titer group O whole blood, even at the time of expiration, as it may provide platelet activity when only the supernatant is provided by the blood bank [ 13 ].
The American College of Surgeons recommends 1:1:1:1 ratios of packed red blood cells, fresh frozen plasma, cryoprecipitate, and platelets. This translates to one packed red blood cell unit of 200mL, one FFP unit of 200mL, 5ml/kg of cryoprecipitate (a 10- or 20-pack), and one apheresis platelet unit of 200mL [ 8 ]. It is underappreciated that if the hematocrit is below 30%, transfused platelets may leak from small suture holes, partly due to low viscosity.
A similar pattern of loss of transfused platelets via small holes may occur if PT and fibrinogen are not corrected. Therefore, the physician should next transfuse FFP until the INR is below 1.6. If FFP is not available, then a physician can request the “Massive Transfusion Protocol” cooler that contains 4 PRBC and 4 FFP already thawed (to avoid the 45 minutes of thawing time). Finally, one must transfuse 3ml/kg to 5ml/kg of cryoprecipitate (usually a 10-pack from 10 whole blood donors, or about 200mL), which can require an hour to thaw and arrive in the cooler. The literature from post-cardiopulmonary bypass surgeries demonstrates that an adequate fibrinogen threshold before the administration of platelets is 200 or even 250mg/dL, not just 150mg/dL, or else platelets may leak from holes [ 14 ]. European guidelines recommend fibrinogen remain above 2.5g/L (250mg/dL) during massive hemorrhage [ 15 ].
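The sequence above, correcting hematocrit, then INR, then fibrinogen before spending scarce platelets, can be expressed as an ordered checklist. The following is a minimal sketch using the thresholds quoted in this review (hematocrit 30%, INR below 1.6, fibrinogen 200); the function name and message strings are illustrative assumptions, not clinical guidance.

```python
# Illustrative ordered checklist for correcting other factors before
# platelet transfusion, using thresholds quoted in this review.
# NOT clinical guidance; function name and messages are hypothetical.

def pre_platelet_corrections(hct_pct: float, inr: float, fib_mg_dl: float) -> list:
    todo = []
    if hct_pct < 30:
        todo.append("raise hematocrit to 30% with PRBC")
    if inr >= 1.6:
        todo.append("transfuse FFP until INR is below 1.6")
    if fib_mg_dl < 200:
        todo.append("give cryoprecipitate 3-5 mL/kg toward fibrinogen 200 mg/dL")
    return todo  # an empty list means the other factors are corrected
```

An empty return, under these assumed thresholds, corresponds to the point in the text at which transfused platelets are least likely to leak from suture holes.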
Patient blood management algorithm education
Education must be repeated often and through various methods to ensure anesthesiologists and surgeons understand the appropriate methods of replenishing platelets intraoperatively [ 16 ]. A lecture with a pre-test and post-test, followed by debriefing by individual anesthesiology faculty, may help achieve this goal, especially where students rapidly turn over on rotations. We elected to place an easily accessible checklist on a 3x5 laminated card on the anesthesia workstation bluebell cart. This checklist contained the phone number for the blood bank, such that the question of which components are immediately available for this patient may be addressed. It also included the phone number of ancillary staff who aid in the transportation of tubes for arterial blood gas (ABG), CBC, PT, PTT, fibrinogen, or TEG from the operating theatre to the lab. In addition to the checklist, several ABG syringes, short and tall purple tops (for CBC or type and cross, respectively), and blue tops (for PT, PTT, fibrinogen, and TEG) must be available. Instructions for computerized ordering and labeling of the lab sample were included. The checklist reminds our staff to correct fibrinogen to 200mg/dL, INR to below 1.7, and hematocrit to 30% before platelet transfusion. It reminds us to redraw a blue and purple top every hour and order ABG, CBC, PT, PTT, and fibrinogen until stable. If bleeding is too rapid to wait for lab results, then the checklist reminds us to transfuse in the 1:1:1:1 ratio, which in practice becomes 6:6:1:1 because six whole-blood donations yield one unit each of cryoprecipitate and platelets. If a TEG is chosen, then the phone number of an expert to interpret findings is also on the card.
Supervision of junior staff must occur frequently by staff informed of the hospital-wide “patient blood management” algorithm [ 16 ]. This algorithm should include a flowchart and numbered steps guiding the frequency of lab draws of specific tests, how to call for help, and the physician in charge of triage at the blood bank when a shortage develops [ 10 ]. A meeting with the blood bank director reveals the structure of the committee utilized for triage [ 16 ]. An anesthesiology attending is an important guide to educate the team about the specific components transfused. Complex cardiac surgery results in 30% more complications where handoffs are made to new anesthesia attendings in the middle of the case [ 17 ]. For example, if supervising multiple operations, platelets may be administered too late or in insufficient amounts, without regard for adjuvants.
In summary, for the operating suite, an anesthesiologist should emphasize the importance of a multidisciplinary approach. Continuing education, regular meetings with stakeholders to review protocols, and posters can all assist in reinforcing algorithms.

Platelet dysfunction and thrombocytopenia are associated with postoperative morbidity not only from modifiable preoperative factors but also from a lack of local patient blood management algorithms. In this regard, platelet transfusions have risen after the COVID-19 pandemic. Simultaneously, there has been a shortage of donors. It is logical, therefore, that each hospital should develop a triage tool, posting its algorithm on the walls. Anesthesiologists should assist in planning a strategy to minimize blood transfusions while improving tissue oxygenation. A flowchart posted in each operating theatre may be customized per patient and hospital. Clinicians need reminders to draw a prothrombin time, fibrinogen, and complete blood count every hour, and of the appropriate threshold to transfuse. Anesthesiologists are often unable to have a discussion with a patient until the preoperative day; thus, the onus falls on our surgical colleagues to reduce risk factors for coagulopathy or to delay surgery until proper consultants have optimized the patient. The most important problems an individual patient has should ideally be listed in a column, where an anesthesiologist can write a timeline of key steps across each row, corresponding to each problem. If a handoff in the middle of the case is required, this handoff tool is superior to simply checking a box on an electronic medical record. In summary, in the operating suite, an anesthesiologist should emphasize the importance of a multidisciplinary approach. Continuing education, regular stakeholder meetings, and posters can assist in reinforcing algorithms in clinical practice.

Review
Common medications to treat coagulation-related bleeding
Epoetin Alfa

Epoetin alfa is often injected subcutaneously three times a week by chronic renal failure patients to obviate frequent PRBC transfusions, at a similar cost of about $200 per PRBC. Prospective studies of orthopedic patients show that anemic patients treated with iron and epoetin boost their hematocrit such that all blood product transfusions, including platelets, are reduced [ 18 ]. If epoetin is begun four weeks before surgery, in combination with intravenous iron, with a target hemoglobin of 13 g/dL rather than 15 g/dL, then fewer doses of epoetin are required, yet transfusion incidence remains at 3% [ 19 ].
Factor 7a
Historically, recombinant factor 7a (Novo-7) was first administered to, and saved the life of, a soldier [ 20 ]. Next, Novo-7 was used in hemophiliac patients undergoing dental extractions, although some experienced the adverse reaction of increased coronary thrombosis. The dose was 90mcg/kg, supplied as a powder to be mixed, used within two hours, and repeated as indicated based on the PT/INR. Novo-7 has been used after cardiopulmonary bypass for the last decade because it avoids intravascular volume overload and inflammatory responses to foreign transfusions. Off-label use is recommended after usual measures fail during massive hemorrhage [ 8 ]. However, thrombotic complications may outweigh the benefit, even in pediatric cardiac surgery [ 21 ].
Tranexamic Acid
Prior to the availability of Novo-7, aminocaproic acid was the mainstay during cardiopulmonary bypass, and its use then spread to multi-level spine surgery [ 22 ]. It was not very effective and brought some concerns about renal failure. Tranexamic acid (TXA) has replaced aminocaproic acid, although it is limited by dose-related seizures. The dose of tranexamic acid has not been optimized in spine surgery, but a trial called “OPTIMIZE” in cardiac surgery published a linear correlation between the reduction in red cells transfused and TXA dose, from 2mg/kg/hr to the high-end dose of 16mg/kg/hr [ 23 ]. It has been studied in a meta-analysis of 20 randomized controlled trials, which found blood loss reduced by about 300mL; just a 1g load can improve surgical site visualization and PT and INR values during orthopedic surgery. Tranexamic acid, desmopressin acetate (DDAVP), and other drugs reduce blood loss in spine surgery [ 24 ]. During hemorrhage, the dose of tranexamic acid is 1g over 10 minutes, followed by another 1g infused over eight hours [ 8 ]. Aprotinin was effective, but its use is now limited to Europe due to concerns about renal failure [ 25 ].
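As a worked example of the hemorrhage regimen quoted above (1 g over 10 minutes, followed by 1 g infused over eight hours), the infusion-rate arithmetic can be written out. This is arithmetic illustration only, not dosing advice, and the function name is a hypothetical label.

```python
# Worked arithmetic for the TXA hemorrhage regimen quoted above:
# 1 g load over 10 minutes, then 1 g infused over 8 hours.
# Illustration only; NOT dosing advice.

def txa_rates_mg_per_hr():
    load_rate = 1000 * 60 / 10   # 1000 mg over 10 min, expressed per hour
    maint_rate = 1000 / 8        # 1000 mg over 8 h, expressed per hour
    return load_rate, maint_rate
```

The arithmetic shows how steep the loading phase is relative to the maintenance infusion: 6000 mg/h equivalent during the 10-minute bolus versus 125 mg/h afterward.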
Desmopressin Acetate
After cardiopulmonary bypass, if a patient is on chronic hemodialysis with a high BUN, the platelets will be dysfunctional from uremia. In these situations, DDAVP 16mcg in 100mL normal saline is often administered over 30 minutes to improve platelet function impaired by bypass or other medications [ 26 ]. If antiplatelet drugs or Von Willebrand’s disease (present in approximately 1% of patients) are the culprits, an expert opinion article in JAMA suggested administration of 0.3mcg/kg of DDAVP [ 8 ]. When Plavix is the cause of platelet dysfunction, even as many as five apheresis transfusions may reverse only half of the coagulopathy. Therefore, the consensus statement describes using DDAVP, TXA, and Novo-7, because these all improved the PFA.
Thrombopoietin
If cirrhosis is suspected, consider thrombopoietin, given at least 10 days before surgery. Vitamin K 5mg intravenously may be given a few hours before surgery, or, if the situation becomes urgent, 4-factor prothrombin complex concentrate (K-Centra) may be administered [ 27 ]. The dose of thrombopoietin (romiplostim) was defined at 3mcg/kg per week for two weeks, because the platelet count rose above 100 x 10^9/L in 79% of patients [ 28 ]. Median platelet counts improved from 47 to 164 at the time of surgery (p < 0.0001). Blood transfusion is unfavorable in cirrhosis with ascites and fluid overload. Caution must be exercised in correcting the PT/INR in liver failure, as it does not track coagulopathy. Instead, during surgery, a TEG should be considered if the patient is coagulopathic [ 29 ]. Naik found that fewer FFP but more cryoprecipitate (and no change in platelets) were transfused when monitoring with TEG.
In cirrhosis, there is enormous variability in the target of preoperative platelet counts [ 30 ]. In multilevel spine surgery, for example, regardless of the etiology of thrombocytopenia, Chow et al. found in 981 patients an odds ratio of 4.88 for PRBC, FFP, or platelet transfusion if the platelet count was <100 preoperatively. If between 100 and 150, the odds ratio was two. An ASA score over 3 was associated with 2.4 higher odds of requiring transfusion [ 31 ]. If instead idiopathic thrombocytopenic purpura (ITP) is the cause, then researchers showed that oral daily eltrombopag 50mg from three weeks preoperatively to one week postoperatively, or intravenous immunoglobulin 1g/kg a week before surgery, may help, although thrombosis is a risk, especially major cardiac thrombosis or pulmonary embolism [ 32 ].
K-Centra
Because FFP is the most common cause of transfusion-related acute lung injury (TRALI), substitutes such as K-Centra are favored [ 8 ]. K-Centra is available as a powder similar to Novo-7 and Ria-Stapp. K-Centra at 1ml/kg (or if INR is 2-4, 25 IU/kg) may be administered over about ten minutes to correct PT [ 33 ]. When coumadin is the cause, K-Centra at 60 IU/kg may be used if INR is over 6, or if the patient was taking apixaban, edoxaban, or rivaroxaban which are direct Xa inhibitors and Xa levels can be followed [ 8 ].
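The weight- and INR-based dosing quoted above can be illustrated with simple arithmetic. This sketch encodes only the numbers named in the text (25 IU/kg when the INR is 2-4; 60 IU/kg for warfarin reversal when the INR exceeds 6); the function name is hypothetical, the ranges outside those quoted are deliberately left unhandled, and this is not a dosing reference.

```python
# Illustrative arithmetic for the K-Centra doses quoted above.
# Encodes only the numbers named in the text; NOT a dosing reference.

def kcentra_dose_iu(weight_kg, inr, on_warfarin=False):
    if on_warfarin and inr > 6:
        return 60 * weight_kg      # 60 IU/kg, per the text
    if 2 <= inr <= 4:
        return 25 * weight_kg      # 25 IU/kg, per the text
    return None                    # outside the ranges named in this review
```

For an 80 kg patient with an INR of 3, the 25 IU/kg figure works out to a 2000 IU total dose, the kind of back-of-envelope check an anesthesiologist might make before calling the pharmacy.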
Ria-STAP
An oft-forgotten blood component is cryoprecipitate. Each whole blood donation generates about 15mL of cryoprecipitate. A brand-name drug to replace fibrinogen is Ria-STAP (CSL Behring) [ 34 ]. Ria-STAP is a powder that is reconstituted and administered at 5ml/min to reach a 50mg/kg total dose [ 35 ].
Monitoring considerations
The anesthesiologist may place an ultrasound-guided arterial line, central line, and peripheral line with a Biopatch to reduce the risks of infection or trauma to the vessel. A simple 20g antecubital peripheral cannula can be exchanged via the Seldinger technique for a shorter, more secure 7 French IV. Injection ports harbor bacterial contamination in over 30% of cases, so adequate hand gel use and port scrubbing with alcohol must be maintained [ 36 - 37 ].
Central lines are often chosen because central venous pressure guides total fluid administration, but the number is only somewhat helpful as a trend. Newer monitors perform better at estimating preload status. Hypervolemia-induced venous hypertension coagulopathy can be prevented by a central venous pressure maintained lower than 10mmHg, yet high enough to help maintain urine output. Preoperative hypertension should be treated so that wide shifts of blood pressure do not contribute to bleeding or to spine or cardiac ischemia (heralded by 1mm ST depression 80ms after the QRS in leads II and/or V5). Mean arterial pressure (MAP) should be kept at 65mmHg, or, if neuromonitoring indicates compromise, a target mean pressure of 85mmHg is often encouraged. Since the advent of the Edwards Flo-Trak connected to a radial arterial catheter, cardiac output can be continuously monitored to deduce the need for vasopressors to raise vascular resistance, or the pulse oximetry PPV or HPI can predict the need for fluids. Some pulse oximeters provide continuous hemoglobin measurements. Point-of-care ultrasound of the short-axis left ventricle allows direct tracing of the end-diastolic area to estimate preload, which is adequate beyond 6cm2/m2 BSA.
If a spine surgery or interventional pain patient complains of back pain postoperatively, for example, an emergency MRI may be warranted to rule out an epidural/spinal hematoma. Re-exploration to evacuate the hematoma is usually required within six hours to prevent permanent paraplegia. Back pain can also indicate an epidural abscess, similarly requiring immediate MRI scan and surgical decompression.
Continuous monitoring of MAP must continue in these cases in the intensive care unit where nurses perform neuro-checks every hour until stable then reduce to every four hours on the ward. Systolic blood pressure must be kept below 140mmHg to prevent hematoma, yet MAP must remain above 65mmHg in most patients or above 85mmHg when signs of ischemia or increased ICP are present (such as during instrumentation or pressure from surgical instruments).
Anesthesiologists are capable of placing a lumbar drain, passing a 17-gauge Tuohy needle with a 19-gauge catheter into the CSF, similar to the contents of a typical epidural kit; the lumbar drain kit is just one gauge larger to avoid occlusion from thick CSF. During endovascular stent or open thoracic aortic aneurysm repair, or transsphenoidal pituitary tumor resection, anesthesiologists place these lumbar drains and leave them in place for about three days postoperatively. The CSF is drained at a 10cmH2O level but not more than 10ml/hour to avoid herniation. During movement from a bed to a gurney, or with any position change, the drain stopcock must be turned off toward the patient to avoid sudden changes in ICP or herniation.
Neuromonitoring technicians communicate via telemedicine in real time to interpret the somatosensory evoked potentials and motor-evoked potential findings with a Ph.D. and report clinical problems to the anesthesiologist and surgeon. The anesthesiologist should maintain the best practices to improve the signal by avoiding more than half a MAC of vapor anesthetic and discussing preoperatively with the technologist. Maintaining adequate cerebral and spinal blood flow by ensuring adequate MAP is essential, appreciating the shift to the right of the autoregulation curve in hypertensive patients. The anesthesiologist may also add monitoring such as near-infrared spectroscopy (NIRS) available on the Flo-Trak called “Fore-Sight” and from other vendors. A sticker similar to a pulse oximeter is placed on the right and on the left forehead to allow titration of hemoglobin, cardiac output, or other factors that relate to mixed venous oxygen saturation. If the NIRS is below 75% or drops more than 20% from baseline, sometimes a transfusion of PRBC is indicated, or drugs to increase cardiac output.
From preoperative consultation with an anesthesiologist, through intraoperative patient blood management algorithms, and finally throughout postoperative monitoring until the patient (hopefully) ambulates home, anesthesiologists assist surgeons in planning a strategy to minimize blood transfusions while improving tissue oxygenation. A flowchart posted in each operating theatre may be customized per patient and hospital. Clinicians need reminders to draw a prothrombin time, fibrinogen, and complete blood count every hour, and of the threshold to transfuse. Anesthesiologists are often unable to meet with the patient until the day before surgery; therefore, the onus falls on the surgeon to reduce risk factors for coagulopathy or to delay surgery until the proper consultants have optimized the patient.
A triage tool developed in Europe during the COVID-19 pandemic included severe burns in the elderly, cardiac arrest where the etiology is irreversible, advanced cognitive impairment, advanced neuromuscular disease, metastatic disease with expected survival less than six months, advanced immunocompromise, NYHA Class III or IV heart failure or COPD with FEV1 under 25% predicted, trauma with significant brain injury, ruptured aortic aneurysm with bleeding, ECMO, or other indicators of expected mortality over 80% [ 38 ]. Famously, one patient may receive 200 units of blood over the course of just three days, yet still expire. It is in difficult situations such as these that ethical implications regarding blood product availability for the entire community must be considered.
The most important problems of an individual patient are listed down the first column, and across each row the anesthesiologist can write a timeline of key steps corresponding to each problem. If a handoff in the middle of the case is required, this handoff tool, including coagulation status, is superior to simply checking a box in an electronic medical record (Table 5 ). For the development of an individual patient plan, the risk must be graded as low, medium, or high for thromboembolic complications, weighed against a low, medium, or high risk of hemorrhagic complications [ 39 ].
Bridge therapies with easily reversible IV heparin or low molecular weight heparin are good options when the risk of stroke or heart attack is high. Despite the statistically higher risk of heart attack compared to bleeding, many proceduralists discontinue Plavix and aspirin, contributing to a three-fold increase in major adverse cardiac events [ 39 ]. Plavix may increase blood loss by 50% without added postoperative morbidity, except in intracranial surgery. Consultation with cardiologists, neurologists, and hematologists is recommended, led by the anesthesiologist, who must document a detailed plan including the immediate availability of platelets or antidote medications, along with a handwritten informed consent.
The World Health Organization recognized patient blood management as necessary as early as 2010, including details such as keeping ionized calcium above 1.1mmol/L and pH above 7.2 [ 16 ].

Acknowledgments
The authors wish to acknowledge the Paolo Procacci Foundation for its generous support in the publication process.

Citation: Cureus. 15(12):e49986
PMC10765565 (PMID: 38179342)

Introduction and background
Leprosy is a contagious infection caused by Mycobacterium leprae . The disease damages the affected area by targeting the peripheral nerves, resulting in swelling. The infection commonly targets the nerves, eyes, skin, and mucosal lining. The affected area thus loses sensitivity to pain and touch, putting the patient at risk for injuries such as cuts and burns, which can lead to infection [ 1 - 5 ].
M. leprae is a pathogen that has adapted to a specific environment. Mycobacterium leprae is an intracellular organism that targets nerves and produces the clinical symptoms of leprosy. It is weakly acid-fast and has undergone significant genome reduction, leaving it with the smallest genome among mycobacteria and many non-functional pseudogenes. Partly because of these non-functional pseudogenes, it is challenging to culture the organism in a laboratory [ 1 , 4 , 6 , 7 ]. Through its evolution, M. leprae has learned how to evade the host's immune system, thus increasing its chance of survival. Using phenolic glycolipid I (PGL-1), a surface lipid of its cell wall, M. leprae can defend itself against oxidative killing. In addition, M. leprae can survive and multiply within macrophages, allowing it to escape the host immune system; thus, it can stay dormant inside the host for a long time until symptoms appear. M. leprae prefers cooler temperatures and is typically found in lower-temperature areas of the skin. Its viability decreases rapidly above 35°C, and because of this, most animals cannot be infected with M. leprae , as they clear the bacteria quickly. The only animal reliably developing leprosy with neurological involvement similar to humans is the nine-banded armadillo, the only natural host of M. leprae other than humans. Once M. leprae infects an area, the skin typically changes color, becoming lighter or darker, and often dry and flaky. The affected area can lose feeling or even become red due to inflammation [ 1 , 8 , 9 ]. Leprosy is a significant global health concern, but despite the stigma, it is not as highly contagious as commonly believed. Effective treatments are available, but early diagnosis is crucial to prevent irreversible disability in the eyes, hands, and feet due to neuropathy. Lifelong care may be necessary for these disabilities.
This article reviews leprosy's epidemiology, microbiology, clinical manifestations, diagnosis, and issues related to treatment.

Conclusions
Leprosy is a contagious infection caused by Mycobacterium leprae and Mycobacterium lepromatosis . Leprosy is a nonfatal infectious disease that is the most common cause of non-traumatic peripheral neuropathy worldwide. It is estimated that 250,000 people contract leprosy every year. The areas with the highest transmission rates are Brazil, Indonesia, and India. However, international travel is so prevalent that the infection is not isolated to these areas. In the United States, 150 to 250 new cases are reported yearly. Transmission of the infection is not fully understood; however, the two proposed transmission routes are aerosol droplets and broken skin-to-skin contact. It is postulated that extensive dissemination within the host can occur once bacteria travel to and infect the upper respiratory tract. The clinical manifestations of leprosy depend upon the body’s cell-mediated immune response. After several years of incubation, leprosy presents slowly with a varied spectrum of disease. The tuberculoid and lepromatous forms of leprosy are the most prevalent subtypes. A common symptom of fatigue and fever may be present in both. Leprosy is characterized primarily by skin lesions, hypoesthesia, and peripheral neuropathy. Early physical exam findings include hypopigmented or reddish skin patches, diminished sensation in involved areas, paresthesia, painless wounds, and tender, enlarged peripheral nerves. The infection causes damage by targeting the peripheral nerves, which results in swelling of the affected area. The disease commonly targets the nerves, eyes, skin, and mucosal lining. The loss of eyelashes and eyebrows and the thickening and enlarging of the nose, ears, facial skin, and cheeks contribute to the typical leonine facial appearance.
In summary, leprosy is a significant global health concern, but despite the stigma, it is not as highly contagious as commonly believed. Effective antimicrobial treatments are available for leprosy; however, due to the severe and lasting complications, early recognition and treatment are crucial to prevent irreversible disabilities of the eyes, hands, and feet. This literature review provides some knowledge on the epidemiology, microbiology, clinical manifestations, diagnosis, and treatment of leprosy, which can and should be used by healthcare professionals to diagnose, treat, and prevent the spread of disease and long-term disability.

Abstract
Hansen disease, known as leprosy, is an infectious disease caused by Mycobacterium leprae . The disease was once thought to be highly contagious, and patients with leprosy were treated poorly and faced discrimination due to the disease's gruesome complications. Mycobacterium leprae , the causative bacterium of leprosy, can generally be found in the nine-banded armadillo. The bacterium is transmitted via aerosol droplets and broken skin-to-skin contact. Once M. leprae enters the body, it targets peripheral nerves and the lining mucosa of the skin and eyes, causing inflammation and tenderness of the affected area. Over time, this leads to peripheral neuropathy and weakness of the affected body parts. Treatment of leprosy involves multi-drug combinations such as dapsone, rifampin, and clofazimine. Even though leprosy is curable, early detection and treatment are crucial to preventing irreversible damage and disabilities. Prevention measures include early detection, treatment regimen adherence, close contact prophylaxis, contact tracing, and community awareness. This review aims to provide the latest diagnostic and therapeutic recommendations for leprosy. It outlines the epidemiology, microbiology, clinical treatment, and immunological methods used to detect leprosy.

Review
Methods
This is a narrative review. Sources were identified by searching PubMed, Google Scholar, Medline, and ScienceDirect using the keywords: Leprosy, Blindness, Mycobacterium leprae, Armadillo, and Immunologic reactions. Sources were accessed between August 2023 and November 2023.
Epidemiology
Over the decades, leprosy has been feared as a highly transmissible, life-debilitating disease. Current literature tells us it is difficult to spread and can be treated easily. Yearly incidence rates, per data reported globally to the World Health Organization in 2019, suggest that approximately 150 people in the United States and 250,000 worldwide contract leprosy. Children comprise about 15,000 of these diagnosed cases [ 1 , 10 - 14 ].
In the United States, 150 to 250 cases are reported yearly. Most occur in those who live in regions where the disease is still common. The states reporting the highest numbers of new cases are Arkansas, California, Florida, Hawaii, Louisiana, New York, and Texas. The countries reporting the highest numbers of new cases are Brazil, Indonesia, and India. India alone produces over half of all new cases, highlighting the disproportionate transmission. Globally, 2 to 3 million individuals are living with leprosy-related disabilities. Early diagnosis and treatment can prevent morbid complications, allowing those who develop leprosy to live a fully active life [ 10 - 14 ].
Transmission of leprosy has yet to be entirely understood. Data show evidence of human, wildlife, and environmental reservoirs that offer transmission pathways for Mycobacterium leprae and Mycobacterium lepromatosis . Individuals residing in close contact with infected leprosy patients are most likely infected via infectious aerosols. Aerosol droplets containing the bacteria are created by sneezing and coughing; transmission possibly also occurs through broken skin-to-skin contact. Once bacteria travel to and infect the upper respiratory tract, extensive dissemination within the host can occur [ 1 , 15 ].
In the southern United States, the nine-banded armadillo (Dasypus novemcinctus) is a proven natural host and reservoir of M. leprae . Identical strains (SNP subtype 3I-2-v15) are found to be passed zoonotically from armadillos to humans who hunt, handle, or eat these animals. Nevertheless, most people encountering armadillos have a very low risk of getting the disease. In 2016, M. leprae and M. lepromatosis were found in red squirrels (Sciurus vulgaris) in the United Kingdom. The strain isolated from red squirrels closely relates to the southern United States armadillo strain. Animal reservoirs may contribute to environmental exposure by shedding viable M. leprae bacteria. This could explain the stable global occurrence of leprosy, since multidrug treatment only lowers human-to-human transmission [ 15 - 19 ].
In experimental settings, M. leprae has been shown to survive inside amoebae and amoebic cysts for weeks, allowing amoebae to function as a vector in transmission. Typically, most people do not develop Hansen's disease following exposure. Various risk factors have been correlated with a higher chance of acquiring leprosy. Leprosy is endemic in regions within countries of Asia, Africa, and North and South America. Being in close contact with an individual with untreated leprosy increases one's chances of contracting the disease compared to the general population. Various studies have proposed that contacts of patients with the widespread lepromatous form have a higher risk than contacts of those with the limited tuberculoid form. In the southern US, the nine-banded armadillo has been found to be naturally infected and to transmit leprosy. Immunosuppression, as in HIV, organ transplantation, and chemotherapy, increases predisposition to this disease [ 10 - 14 , 19 ].
Genetic variation determining how the human body responds to infection has also been suggested to impact one’s chance of contracting leprosy. The immunologic reaction to leprosy is determined by innate and adaptive immunity. Variations in genes of the NOD2-mediated signaling pathway can impair the innate immune response. Clinical manifestations can vary vastly depending upon the body’s ability to mount an acquired immune response to the bacterial infection. This cellular immune response appears to be regulated by numerous non-human leukocyte antigen (HLA) genes [ 10 - 14 , 19 ].
Microbiology
Mycobacterium leprae has an affinity for infecting the skin and nerves of the body. Leprosy is a nonfatal contagious disease that is the most common cause of non-traumatic peripheral neuropathy worldwide [ 1 , 8 ]. Mycobacterium leprae ( M. leprae ) is an intracellular, acid-fast, aerobic, rod-shaped bacillus that can infect humans and other species such as armadillos and primates. It exits the host via the skin, specifically the dermis, and the nasal mucosa. Mycobacterium leprae is not usually found in the epidermis of the skin. Still, multiple studies have found evidence of Mycobacterium leprae in the desquamating epithelium and the superficial keratin layer of the skin, strongly suggesting that this microorganism can survive and multiply alongside the sebaceous glands [ 1 , 4 , 5 , 8 , 10 , 20 ]. Although the exit route of Mycobacterium leprae is known and heavily researched, the entry route is still debated between the upper respiratory tract and the skin; according to some recent research, the upper respiratory tract is most likely favored. Recently, new studies revealed that Mycobacterium leprae enters nerves through the endoneurial laminin-2 isoform and the receptor alpha-dystroglycan; this newly found evidence sheds light on the pathogenesis of the peripheral nerve damage caused by leprosy. Alpha-dystroglycan is usually associated with early development and the pathogenesis of muscular dystrophy, but in the setting of Mycobacterium leprae infection, it serves as a receptor on Schwann cells [ 4 , 7 , 16 , 21 ].
Growth and incubation period
Mycobacterium leprae is an obligate intracellular pathogen that infects Schwann cells and macrophages, with a slow doubling time of about 14 days. Mycobacteria differ from other bacteria in that their membranes contain unique lipids, mycolic acids, which give them unique characteristics. This large hydrophobic cell membrane prevents polar molecules and most drugs from entering the cell [ 1 - 4 , 8 , 10 , 22 ]. The clinical manifestation of leprosy is highly dependent on the host immune response. A host can develop either a T-cell-mediated immune response or a humoral-mediated immune response. Because Mycobacterium leprae does not respond to antibodies, patients who develop a humoral-mediated immune response develop a much more severe clinical manifestation than those who develop a T-cell-mediated response. Patients whose immune system mounts a T-cell-mediated response develop antigen-specific CD4+ T-cells of the Th1 subtype, similar to the response against Mycobacterium tuberculosis . Th1 cells secrete cytokines such as IFN-gamma, activating macrophages and enabling them to phagocytose the bacteria. This pathway is described as “tuberculoid leprosy.”
Patients whose immune systems develop a humoral response produce antigen-specific CD4+ T-cells of the Th2 subtype, which secrete cytokines such as IL-4 and IL-5. These interleukins stimulate antigen-specific B-cells. Because M. leprae does not respond to antibodies, these individuals cannot fight the bacteria effectively and present with what is called “lepromatous leprosy” [ 1 - 4 , 8 , 10 , 22 ].
Clinical manifestations and diagnosis
The clinical manifestations of leprosy depend upon the body’s cell-mediated immune response. After several years of incubation, leprosy presents slowly with a varied spectrum of disease. The most common subtypes are tuberculoid leprosy and lepromatous leprosy. A general presentation of fatigue and fever may exist in both types. The chief signs of leprosy include skin lesions, hypoesthesia, and peripheral neuropathy [ 1 , 10 , 16 ].
Tuberculoid leprosy is a milder form of the disease. It has a better prognosis in most patients but may progress to a more extreme form. Tuberculoid leprosy is characterized by painless red or pale lesions with loss of sensation on the face, trunk, and extremities. There is palpable thickening of peripheral nerves because M. leprae invades and multiplies within Schwann cells. The significant sensory loss makes patients vulnerable to trauma, infections, or muscle atrophy. With progression, lesions tend to obliterate normal skin structures such as sweat glands and hair follicles. A vigorous cell-mediated immune response causes phagocytic destruction of the organism but also amplifies allergic reactions [ 2 , 5 , 10 , 16 ].
Lepromatous leprosy, a more severe form, involves widespread skin involvement with many bacteria. Thickening of peripheral nerves with paresthesia is present and lasts longer than in tuberculoid leprosy. Repeated trauma of the hands and feet allows superinfection to occur, with potential ulceration. Through disease progression, facial deformities and paralysis develop. The typical leonine facial appearance results from the loss of eyebrows and eyelashes, and thickening and enlargement of the nostrils, ears, facial skin, and cheeks. In the late stages of advanced disease, gradual destruction of the nasal septum causes it to collapse. Involvement of bones, eyes, and other tissues can ensue. A weak cell-mediated immune response allows many organisms to remain viable in the lesions. Indeterminate leprosy, borderline tuberculoid leprosy, mid-borderline leprosy, and borderline lepromatous leprosy are intermediate forms of disease that may progress to either subtype [ 1 , 4 , 7 , 13 , 22 - 26 ].
Neuropathy is a common clinical symptom of leprosy. M. leprae infects peripheral nerves by invading lymphatic and epineural blood vessels. Upon reaching the endoneurium, it grows intracellularly within Schwann cells. M. leprae invades Schwann cells through a cascade of events, beginning by binding to α-dystroglycan, a component of the basal lamina. Once inside, it stimulates and attracts macrophages. Infected macrophages begin to produce nitric oxide that destroys axons by causing mitochondrial injury and initiating demyelination. The complement system also contributes to the demyelination seen in patients, termed rapid Wallerian degeneration. The bacteria may further promote the spread of infection by reprogramming Schwann cells to a progenitor stem cell stage. Tuberculoid leprosy is characterized by neuropathy of the face, trunk, and extremities. Nerve thickening is prominent and palpable because the bacteria multiply within the nerve sheaths. Impaired sensation exposes the patient to recurrent trauma and secondary infections [ 1 , 4 , 7 , 13 , 22 - 26 ].

Ophthalmic injury occurs in greater than 70% of leprosy patients, and blindness can arise in 5%. Impairment of the ocular nerves that control muscles of the eyelids and provide sensory innervation to the cornea may lead to corneal ulceration or abrasion, drying of the cornea, and, most commonly, lagophthalmos. Each patient should be carefully evaluated for damage to the cornea and conjunctiva and the capability to close the eyelids fully. Early detection is vital for patient management [ 1 , 4 , 7 , 13 , 22 - 26 ].

There are two types of leprosy reactions, Type 1 and Type 2, which appear to have different underlying immunologic mechanisms. The development of new lesions during or after treatment completion can usually be attributed to an immunologic response. Type 1 reaction (T1R, reversal reaction) occurs in patients with borderline tuberculoid, mid-borderline, or borderline lepromatous disease.
Without treatment, the likely course of T1R is several months. T1R results from a spontaneous heightening of the cellular immune response and delayed-type hypersensitivity to M. leprae antigens. No known risk factors or routine laboratory tests are available to predict which patients may experience this reaction; therefore, no changes should be made to a patient's treatment routine to avoid a reaction. Type 2 reaction (T2R, erythema nodosum leprosum, ENL) occurs in patients with borderline lepromatous and lepromatous disease. Without treatment, the likely course of T2R is one to two weeks, but it can recur over many months. The mechanism of T2R has yet to be fully understood; it is commonly deemed an immune complex disorder. Risk factors for Type 2 reaction include pregnancy, lactation, and puberty [ 25 , 27 - 31 ].
Treatments
Early clinical diagnosis and treatment are instrumental in reducing the transmission of leprosy and preventing the development of severe complications. Before pharmacological therapy, patients have undergone prednisolone challenge or skin biopsy with PCR testing to assess for known genetic markers of drug resistance. This allows for a more effective treatment plan with a lower probability of treatment failure. Due to the rising risk of bacterial resistance to therapy, as with tuberculosis, the treatment options for leprosy consist of a multidrug approach, specifically a three-drug regimen. According to the guidelines of the National Hansen’s Disease Program (NHDP), which is also supported by the World Health Organization (WHO), the first-line medications are dapsone, rifampin, and clofazimine. Second-line alternatives for patients who fail first-line anti-leprosy treatment, or when drug resistance is detected, include ofloxacin and minocycline [ 5 , 11 , 13 , 23 , 26 , 32 - 40 ].
First-line triple therapy with dapsone, rifampin, and clofazimine is the most effective treatment for leprosy, but it does carry certain risks. Dapsone has bacteriostatic activity, inhibiting bacterial synthesis of dihydrofolic acid and thereby inhibiting bacterial nucleic acid synthesis and replication. Prior to the initiation of treatment, all patients should be screened for glucose-6-phosphate dehydrogenase deficiency, as dapsone may cause hemolytic anemia in these patients. Other adverse reactions of dapsone include hypersensitivity syndrome, methemoglobinemia, and agranulocytosis. Rifampin has bactericidal activity, inhibiting bacterial DNA-dependent RNA polymerase and thereby preventing elongation of the messenger RNA; this impedes RNA synthesis and results in cell death. Notable side effects include cytochrome P450 induction, hepatotoxicity, drug-induced hepatitis, and thrombocytopenia. In addition to the other agents, clofazimine has bactericidal and anti-inflammatory activity, binding to mycobacterial DNA and thereby impeding bacterial growth. Significant side effects include red-black skin discoloration, retinopathy, nephrotoxicity, and cardiac arrhythmia [ 5 , 11 , 13 , 23 , 26 , 32 - 40 ].
The second-line treatments for leprosy are ofloxacin and minocycline, which are fundamental to tackling the emerging problem of resistance to first-line drugs. Ofloxacin is a fluoroquinolone that inhibits bacterial topoisomerase IV and DNA gyrase, thereby preventing DNA replication. Common side effects include headache, tendonitis, peripheral neuropathy, and hepatotoxicity. Minocycline is a tetracycline with bactericidal activity against M. leprae that binds to the bacterial 30S ribosomal subunit, thereby preventing protein synthesis [ 32 ]. Notable side effects include dizziness, photosensitivity, hypersensitivity reactions, and autoimmune disorders. Although drug resistance can occur with both drugs, it is less common, and these agents have less severe adverse reactions than first-line agents [ 5 , 11 , 13 , 23 , 26 , 32 - 40 ]. The treatment course generally lasts about 12 months, with daily or monthly dosing depending on the clinical manifestation of the disease course as defined by the Ridley-Jopling classification. Leprosy presents clinically on a spectrum, reflecting the organism’s load and the patient’s immune response. The “Ridley-Jopling classification” is a categorical division of leprosy that combines “the cutaneous, neurological, and biopsy findings, with the immunological capabilities” of the patients. The classifications of leprosy are: tuberculoid (TT), borderline tuberculoid (BT), mid-borderline (BB), borderline lepromatous (BL), lepromatous (LL), and indeterminate (I); nonetheless, “the majority of patients fall into a broad borderline category between TT and LL” [ 5 , 11 , 13 , 23 , 26 , 32 - 40 ].
A significant complication associated with the treatment of leprosy is the possibility of relapse. Considering the prolonged treatment period, patient non-compliance and adherence to therapy are important risk factors, so patient education and follow-up assessment are essential. While these first- and second-line drugs are generally effective, complications and drug resistance can still arise, especially with dapsone and rifampin. Monitoring for these complications is crucial during the treatment course (Table 1 ) [ 5 , 11 , 13 , 23 , 26 , 32 - 40 ].
Prevention
A recent study examined the effectiveness of using rifapentine to prevent the spread of leprosy among household contacts of those diagnosed with the disease. The trial involved 7,450 individuals aged 10 years or older in China and found that a single dose of rifapentine significantly reduced the incidence of new leprosy cases over four years compared to no intervention. However, the reduction was not statistically significant compared to a single dose of rifampin. These results suggest that postexposure prophylaxis with rifapentine should be administered to household contacts of patients with newly diagnosed leprosy who are aged 10 years or older [ 32 ].
Prevention of leprosy is instrumental in reducing complications and transmission rates. Key preventive measures include early diagnosis and initiation of appropriate treatment, adherence to treatment regimens, prophylaxis for close contacts, contact tracing and surveillance, and health education and community awareness. These methods allow for early detection and treatment and encourage early reporting and treatment-seeking behavior. Implementation of preventive measures is paramount in controlling leprosy and reducing its burden on the communities [ 22 , 23 , 32 , 41 - 43 ].

Citation: Cureus. 15(12):e49954
PMC10787665 (PMID: 38222235)

Introduction
Ewing sarcoma is a rare solid tumor, with the majority of cases occurring in bone (15%-20% of cases develop in soft tissues) [ 1 ]. It corresponds to the second most common bone neoplasm that mainly occurs in children and young adults, especially in the second decade of life [ 2 ]. This neoplasm can affect any bone; however, the axial skeleton, pelvis, and femur are the most commonly affected sites [ 3 ].
Although distant metastases are present in 25% of cases (most often in lung, bone, and bone marrow), Ewing sarcoma exhibits a high rate of recurrence (over 90%) if treated without systemic therapy, emphasizing an aggressive pattern of this disease with a high potential to develop micrometastases [ 4 , 5 ].
KIT gene somatic mutations with gain-of-function lead to constitutive activation of the tyrosine kinase receptor, playing a key role in carcinogenesis and sustained tumor growth. The presence of KIT mutations in Ewing sarcomas is rare, but much more frequent in other neoplasms, namely mastocytosis [ 6 - 9 ].
We aimed to describe a case of an adult with an Ewing sarcoma carrying a KIT mutation that developed at the site of a previous mast cell proliferation.

Discussion
The cell of origin of Ewing sarcoma is not established, although several hypotheses are currently considered (possible origin from a mesenchymal cell, hematopoietic cell, fibroblast, or neural crest cell) [ 10 ]. Histologically, these neoplasms are characterized by the presence of small, uniform, undifferentiated round cells, unlike other types of sarcomas that may exhibit specific lineage differentiation [ 11 ]. CD99 expression by Ewing sarcoma cells can be considered a marker, although its specificity is not exclusive to this cancer subtype [ 3 , 12 ]. The genetic hallmark is the translocation involving the EWSR1 gene. In 85%-90% of cases, the chromosomal translocation observed is between chromosomes 11 and 22, t(11;22)(q24;q12), giving rise to a chimeric fusion gene (EWSR1-FLI1) [ 13 , 14 ]. This encodes an aberrant transcription factor that leads to the proliferation and differentiation of Ewing sarcoma cells. In this case, the characteristics of the cells (small, round, and monomorphic, with scant clear cytoplasm and nuclei with regular contours, without apparent nucleoli), along with immunoreactivity for CD99 and identification of a translocation involving the chromosomal region 22q12 (EWSR1 gene) by FISH, support the diagnosis of Ewing sarcoma.
CD117, a transmembrane receptor with tyrosine kinase activity, is encoded by the proto-oncogene c-KIT and is expressed by the majority of hematopoietic cells [ 15 ]. With the maturation of hematopoietic cells, only mast cells retain this receptor [ 16 ]. Other cells, such as melanocytes, germ cells, some epithelial cells, and Cajal cells of the gastrointestinal tract, also express CD117. By binding to its ligand, stem cell factor (SCF), the transmembrane receptor with tyrosine kinase activity induces cell survival, proliferation, and maturation [ 15 ]. Somatic mutations with the gain-of-function of the KIT gene lead to constitutive activation of the receptor, playing a pivotal role in tumorigenesis [ 6 ]. CD117 expression is not a specific marker as it is expressed by several types of neoplasms, such as mastocytosis [ 15 ]. Despite its low specificity, the expression of CD117 and KIT gene mutations are markers of mastocytosis [ 7 ].
Despite the variable expression of CD117 (reported in up to 65% of cases), KIT gene gain-of-function mutations were identified in only 2.6% of cases of Ewing sarcoma [ 8 , 9 ]. The case described here illustrates an Ewing sarcoma with a very uncommon presentation, developing secondarily at the site of a previous mast cell proliferation after four years of evolution. The two lesions share not only the location but also the expression of CD117. The diagnosis of Ewing sarcoma as the initial lesion is excluded by the different histological characteristics and by the sudden expansion of the lesion after four years of progressive local inflammatory symptoms. The development of Ewing sarcoma in patients with pre-existing lesions is exceptional. It is described in the literature, mainly in children and young people, in the post-therapeutic context (chemotherapy or radiotherapy) of hematological diseases (non-Hodgkin's lymphomas, diffuse large B-cell lymphoma, and T-cell leukemia). One of the mechanisms involved is the alteration of innate and acquired immunity, which fails to detect and eliminate new mutated cells carrying the characteristic translocation of Ewing sarcoma, allowing the initiation of a secondary sarcomatous lesion [ 17 - 19 ].
Multiple KIT gene-activating mutations have been described in mast cell neoplasms. These mutations are frequently found on exons 11 and 17, with the D816V mutation on exon 17 occurring in the majority of cases. Mutations involving exon 10, which encodes the transmembrane domain, are rare and only sporadically reported in adult mastocytosis. The M541L mutation on exon 10, present in the case reported here, was first described in mast cell neoplasms in 2010 [ 20 ]. As far as we know, there are no previous reports describing the KIT M541L mutation involving exon 10 in Ewing sarcoma.

Conclusions
Despite the low specificity of CD117, its expression and KIT gene mutations are markers of mastocytosis. Expression of CD117 is variable in Ewing sarcoma, but gain-of-function KIT mutations were identified in only 2.6% of these patients. In this case, the detection of a KIT gene mutation in an Ewing sarcoma that developed secondarily at the site of a previous mast cell proliferation raises the hypothesis of a possible sarcomatous evolution of the original lesion. To the best of our knowledge, this is the first report describing the KIT M541L mutation (exon 10) in Ewing sarcoma.

Abstract

KIT gene mutations in Ewing sarcomas are rare; however, they are much more frequent in other neoplasms, namely mastocytosis. We describe the case of an adult male with a one-year history of recurrent episodes of pain, swelling, and redness on the proximal phalanx of the third finger of his right hand. A core biopsy suggested a possible mastocytosis. After four years of recurrent episodes and worsening symptoms, an incisional biopsy revealed an Ewing sarcoma with a KIT gene mutation (M541L, on exon 10). Gain-of-function KIT mutations have been identified in 2.6% of Ewing sarcomas. In this case, the detection of a KIT mutation in an Ewing sarcoma that developed at the site of a previous mast cell proliferation raises the hypothesis of a possible sarcomatous evolution of the original lesion. To the best of our knowledge, similar cases are not described in the current literature. This is also the first report describing the KIT M541L mutation (exon 10) in Ewing sarcoma.

Case presentation
A male in his early 30s presented in 2016 with pain on the proximal phalanx of the middle finger of the right hand associated with redness and swelling, with a one-year duration. Blood analysis was normal for a complete blood count, renal and liver function, electrolyte balance, C-reactive protein, and tryptase. Magnetic resonance imaging (MRI) of the right hand found an abnormal morphology at the base of the first phalanx of the middle finger and an oval area measuring 7 mm. In addition, there was a focal area of rupture of cortical bone and an increase in adjacent soft tissue volume. There was no onion peel periosteal reaction (Figure 1 , Panel A). Bone scintigraphy demonstrated two areas of hyperactivity on the first phalanx of the middle finger of the right hand (Figure 1 , Panel B). A core biopsy was performed, and histological examination revealed a few aggregates of small- to intermediate-sized cells, epithelioid to fusiform, with immunoreactivity only for CD117, suggesting a possible mastocytosis. Systemic mastocytosis could not be confirmed by subsequent bone marrow evaluations or by any other organ involvement.
Due to recurrent episodes of pain, swelling, and redness at the same site refractory to antihistamines, topical and oral corticosteroids, and sodium cromoglycate, the case was discussed again at a multidisciplinary team consultation in 2018, and local corticosteroid administration was initiated. However, the patient maintained recurrent attacks without change in frequency or severity. In 2020, the pain worsened, and a rapidly growing swelling appeared at the same location.
MRI demonstrated a diffuse infiltration of the bone marrow of the entire proximal phalanx of the middle finger of the right hand, with the longest diameter in the axial section of 25 mm. It also showed a complete replacement of the normal pattern of adipose signal and two hyper-uptake areas: the base of the phalanx and its distal third. Areas of cortical bone discontinuity and exuberant soft tissue involvement were also visualized (Figure 2 ). 18-Fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) revealed very intense uptake in the proximal phalanx of the middle finger of the right hand, with a maximum standardized uptake value (SUVmax) of 11.3. There was no sign of distant metastasis.
According to these findings, the patient was submitted to an incisional biopsy. Histology demonstrated dense cellular proliferation organized in solid nodules of small, round, and monomorphic cells with scanty, clear cytoplasm and a nucleus with regular contours without an apparent nucleolus (Figure 3 , Panel A). Immunohistochemistry (IHC) revealed immunoreactivity for CD99, NKX2.2, and CD117 (Figure 3 , Panels B and C). A translocation involving the chromosomal region 22q12 (EWSR1 gene) was detected by fluorescence in situ hybridization (FISH) (Figure 3 , Panel D). The KIT gene mutation (M541L, on exon 10) was detected by next-generation sequencing (NGS).
The patient started neoadjuvant treatment with a combination of alternating chemotherapy every three weeks. This regimen included vincristine, doxorubicin, and cyclophosphamide (VAC; vincristine at a total dose of 2 mg on day one, doxorubicin 75 mg/m² on days one and two, and cyclophosphamide 1200 mg/m² on day one) alternating with ifosfamide plus etoposide (IE; ifosfamide 1800 mg/m² infused over one hour and etoposide 100 mg/m² infused over two hours, both daily, for five consecutive days). After completing four preoperative cycles with good clinical and analytical tolerance, an MRI was performed. It showed a dimensional regression of the initial lesion, now with the longest diameter in the axial section of 17 mm, consistent with clinical improvement (Figure 4 ).
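The chemotherapy doses above are expressed per square meter of body surface area (BSA). As a purely illustrative arithmetic sketch (not dosing guidance), an absolute dose can be derived from an estimated BSA; the Mosteller formula and the example height and weight below are assumptions, not data from this case.

```python
import math

def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
    """Estimate body surface area in m^2 using the Mosteller formula:
    BSA = sqrt(height_cm * weight_kg / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def absolute_dose_mg(dose_mg_per_m2: float, height_cm: float, weight_kg: float) -> float:
    """Convert a BSA-based dose (mg/m^2) to an absolute dose in mg."""
    return dose_mg_per_m2 * mosteller_bsa(height_cm, weight_kg)
```

For a hypothetical 180 cm, 80 kg patient, the estimated BSA is exactly 2.0 m², so a doxorubicin dose of 75 mg/m² corresponds to 150 mg per administration.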
Surgery was performed with the amputation of the middle finger of the right hand without intra- or postoperative complications. Histology revealed a ypT1N0 stage, with tumor necrosis superior to 90% and clear surgical margins. The patient restarted systemic treatment, completing a total of eight postoperative cycles.
After two years since the last systemic treatment, the patient has no evidence of recurrent malignant disease. He maintains regular medical visits every three months for clinical history, physical examination, blood analysis, and imaging reevaluation (with 18F-FDG PET/CT). The patient remains asymptomatic and maintains independence for all daily and work activities.

Published in Cureus 15(12):e50537 under a CC BY license.
PMC10787667 (PMID 38222133)

Introduction
Renal transplantation is a preferred treatment for end-stage renal disease and greatly enhances quality of life [ 1 ]. However, antibody-mediated rejection poses a significant challenge to long-term graft survival and can lead to graft dysfunction or loss [ 2 ]. Antibody-mediated rejection encompasses various types, including hyperacute, acute, and chronic rejection [ 3 , 4 ]. Hyperacute rejection, a rare occurrence in renal allograft transplantation, is typically triggered by preformed HLA antibodies. This can be prevented by conducting pretransplant T- and B-cell cross-matching [ 3 , 5 - 7 ].
Recent evidence suggests that endothelial/non-HLA antibodies, which are not routinely detected through standard cross-match methods, may also play a role in mediating rejection [ 7 , 8 ]. Despite negative results in prospective T- and B-cell cross-matching and the absence of endothelial/non-HLA antibodies, hyperacute rejection can still manifest. While management strategies such as plasmapheresis and high-dose intravenous immunoglobulin have shown efficacy in preventing and treating antibody-mediated rejection [ 9 - 11 ], the lack of a standardized therapeutic approach remains a significant concern regarding graft loss.

Discussion
The occurrence of hyperacute rejection is an unusual event in this modern era due to worldwide pretransplant cross-matching to detect pre-existing donor-specific antibodies. Non-HLA antibodies have been reported as responsible in the absence of HLA antibodies [ 12 ], and they can be identified through non-HLA antibody testing before allograft transplantation. Histopathology typically reveals arteritis, interstitial edema, and severe cortical necrosis, which may necessitate transplant nephrectomy [ 3 ].
Therapeutic approaches involve antibody depletion through plasmapheresis; immunomodulation with intravenous immunoglobulin; T-cell depletion using anti-thymocyte globulin; immunosuppression with mycophenolate mofetil, tacrolimus, and prednisone; and targeting of the terminal complement pathway with eculizumab. No definitive treatment exists, and combinations of these therapies are employed to manage hyperacute rejection, particularly in highly sensitized patients before transplantation.
This case illustrates that conventional pretransplant cross-matching may not always prevent hyperacute rejection; however, graft salvage is possible through immediate graft extraction and aggressive immunotherapy, as outlined above. The use of eculizumab in combination with conventional antibody-mediated rejection therapy can prevent early graft loss [ 13 , 14 ], but randomized studies are necessary to assess its extensive utilization. Nonetheless, the high cost limits its use. Vaccination against meningococcus and pneumococcus is mandatory before initiating eculizumab.
Despite extensive investigation in our case, the etiology remained unidentified. While advanced techniques aid in detecting HLA and non-HLA donor-specific antibodies, the alloimmune response continues to pose a significant challenge [ 13 ]. Further research is needed to enhance the efficacy of pharmacotherapy and gain a better understanding of the pathophysiology, diagnosis, and treatment of hyperacute rejection.

Conclusions
This case introduces a strategy to address hyperacute rejection, a condition that is not commonly encountered and currently lacks any established treatment. The swift removal of the transplanted organ, along with intensive immunotherapy and the application of eculizumab, proved effective in managing this condition in our patient. Thorough investigation and further research are needed to verify the effectiveness and potential implications of this approach.

Abstract

Hyperacute rejection is a rare complication of renal transplantation. It is mainly caused by preformed human leukocyte antigen antibodies and can lead to the loss of the transplanted kidney. Renal transplantation is a highly beneficial treatment for people with end-stage renal disease, greatly improving their quality of life. However, antibody-mediated rejection is a significant challenge for the long-term survival of transplanted kidneys.
An 18-year-old male with nephrotic syndrome, who underwent bilateral renal nephrectomy due to severe proteinuria, received a living donor kidney. Pretransplant panel reactive antibodies were low. Cytotoxic T- and B-cell and non-HLA cross-match was negative. The graft became cyanotic and mottled within half an hour of transplantation. Allograft was quickly extracted, and a biopsy showed hyperacute rejection. The patient was treated with plasmapheresis, intravenous immunoglobulin, and eculizumab. The graft was successfully re-implanted after 18 hours. Further treatment included additional sessions of plasmapheresis, intravenous immunoglobulin, eculizumab, T-cell-depleting agent, and immunosuppressive therapy. Serum creatinine became stable, and renal biopsy after one month demonstrated intact parenchyma with no inflammation or fibrosis.
This case highlights the critical importance of promptly removing the transplanted kidney and using aggressive immunotherapy to save renal allografts in cases of hyperacute rejection.

Case presentation
An 18-year-old male, with a significant past medical history of focal segmental glomerulosclerosis, underwent bilateral native nephrectomy due to severe proteinuria. Subsequently, he received a living donor kidney transplant from his mother. The patient had a heterozygous Nephrin mutation, which was the underlying cause of focal segmental glomerulosclerosis. Two weeks before the transplant, the pretransplant calculated panel-reactive antibodies were 16%, which increased to 56% on the day of the transplant but subsequently decreased to less than 1%. The cytotoxic T- and B-cell cross-match results were negative, and the patient did not develop donor-specific anti-HLA antibodies.
The initial perfusion of the transplanted kidney appeared satisfactory, but within 30 minutes, the graft displayed cyanosis and a mottled appearance despite exhibiting good intra-graft Doppler ultrasound signals. The transplant kidney was promptly devascularized, extracted, and flushed with the University of Wisconsin solution. A frozen section of a wedge biopsy revealed the presence of neutrophilic glomerulitis and small arteriolar thrombi, consistent with hyperacute rejection (Figures 1 , 2 ).
The allograft was subjected to hypothermic machine perfusion overnight, and the pump parameters indicated favorable results (flow: 110 mL/minute, RI: 0.24). To address the rejection, the patient received treatment comprising plasmapheresis, intravenous immunoglobulin (400 mg/kg), and eculizumab (1,200 mg) to inhibit complement activation. After 18 hours, the allograft was successfully reimplanted. Following the surgery, the patient underwent five sessions of plasmapheresis every other day, along with administration of eculizumab (600 mg), intravenous immunoglobulin (200 mg/kg), and rabbit anti-thymocyte globulin (total dose of 7.5 mg/kg). The patient’s immunosuppression regimen included prednisone, mycophenolate mofetil, and tacrolimus.
Serum creatinine levels steadily improved, reaching below 4 mg/dL within one week and stabilizing at 2.0-2.3 mg/dL at three months. A repeat allograft biopsy conducted at one month indicated intact parenchyma without any signs of inflammation or fibrosis, and C4d staining in peritubular capillaries was negative (Figure 3 ). A retrospective T- and B-cell flow cross-match performed on a sample from the day of the transplant yielded negative results. Additionally, endothelial cross-match and non-HLA antibody testing, including angiotensin II type 1 receptor and major histocompatibility complex class I-related chain A, showed negative results. Genetic screening for complement regulatory mutations did not provide a diagnostic outcome.

Acknowledgements

The authors greatly acknowledge the contributions of the admirable individuals who donated kidneys.

Published in Cureus 15(12):e50538 under a CC BY license.
PMC10787668 (PMID 38217785)

Introduction
Among health professionals, nurses constitute the largest workforce. Thus, it is vital to increase the quality of service that nurses provide for positive patient outcomes [ 1 ]. However, nursing is generally accepted as a high-risk profession in terms of burnout and work-related stress, with nurses in certain specialties experiencing particularly high levels of stress [ 2 , 3 ]. Hospital-employed nurses have higher rates of mental health challenges than the general population [ 4 ]. Particularly, depression, anxiety, and stress are rated high, reducing nurses’ quality of life (QoL) ratings [ 5 ]. As nurses work in interdependent settings, these mental health and QoL concerns could have serious implications for patients, other healthcare professionals, and healthcare organizations at large.
In addition, the rapid introduction of technological developments in healthcare systems adds another layer of complexity to the already demanding jobs of nurses, particularly those working in perioperative care. Robotic-assisted laparoscopic surgery has changed the physical and interpersonal context of surgical teams compared to pure laparoscopic surgery, potentially impacting nurses' job satisfaction as well as subsequent patient outcomes. Robotic-assisted and pure laparoscopic surgery nursing differ in several respects. Robotic-assisted surgery nurses are responsible for preparing the robotic surgical system and controlling it during surgery, and they must know the sterile and non-sterile parts of the robot. Their responsibilities include checking the patient's position before and during surgery, placing surgical instruments on the robotic arms, applying relevant procedures in emergency situations, and monitoring and interpreting system information to keep the patient safe [ 6 , 7 ]. Finally, they keep surgical materials available in case of conversion to laparoscopic or open surgery. Due to the complexities introduced by new technology, robotic surgery nurses perform varied, specialized tasks that laparoscopic surgery nurses do not. Yet, despite the changing landscape of work and increased responsibilities, there is a scarcity of research examining the effects of new technologies on nurses' job satisfaction.
Job satisfaction refers to a person's attitudes toward work, including the emotional states they experience when they meet their work-related goals and expectations [ 8 ]. Satisfaction with the work environment has implications for employees' relationships as well as their own psychological well-being.
The main objective of this study is to compare the job satisfaction of nurses in robotic-assisted laparoscopic and pure laparoscopic surgery. We also examine whether the two groups of nurses differ in terms of their psychological well-being (i.e., depression) and QoL ratings.

Materials and methods
This study was approved by the Institutional Ethics Committee (Approval #: E1-20-356) and performed in accordance with the ethical standards stated in the 1964 Declaration of Helsinki. Informed consent was obtained from all participants. This cross-sectional study was based on a paper–pencil survey conducted from June 2020 through September 2020.
A total of 101 perioperative nurses who had been working in robotic-assisted laparoscopic ( n : 51; 41 female, 10 male) and pure laparoscopic ( n : 50; 40 female, 10 male) surgery in six different centers (3 government and 3 private hospitals) were included. Participants were licensed registered nurses with at least 1 year of employment at their current institution.
Measures
Our primary outcome is job satisfaction, whereas the secondary outcomes are psychological well-being and quality of life ratings. Accordingly, participants filled out the Minnesota Satisfaction Questionnaire (MSQ), Beck Depression Inventory (BDI), and SF-36 QoL Survey.
The short version of the MSQ was used to measure job satisfaction among nurses. This questionnaire is a 20-item self-report measure that examines two aspects of job satisfaction: (1) intrinsic satisfaction (i.e., how employees feel about the nature of their job tasks), (2) extrinsic satisfaction (i.e., how employees feel about aspects of the work situation that are external to the job tasks such as work conditions). Each sub-scale consists of ten items scored on a five-point scale ranging from 1 ( very dissatisfied ) to 5 ( very satisfied ). The total score obtained from adding intrinsic and extrinsic satisfaction sub-scores indicates overall job satisfaction. The overall score ranges from 20 to 100 such that scores ranging from 20 to 47 indicate low job satisfaction, 48–76 indicate moderate job satisfaction, and 77–100 indicate high level of job satisfaction [ 9 ].
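The scoring rules just described can be sketched in a few lines of Python. The function below is illustrative only: it assumes the first ten responses belong to the intrinsic sub-scale and the last ten to the extrinsic sub-scale, a simplification of the actual scoring key.

```python
def score_msq(responses):
    """Score the 20-item short-form MSQ (each response rated 1-5).

    Returns (intrinsic, extrinsic, total, level) using the cut-offs
    reported in the text: 20-47 low, 48-76 moderate, 77-100 high.
    The first-ten/last-ten sub-scale split is an illustrative assumption.
    """
    if len(responses) != 20 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected 20 responses rated from 1 to 5")
    intrinsic = sum(responses[:10])   # each sub-scale ranges from 10 to 50
    extrinsic = sum(responses[10:])
    total = intrinsic + extrinsic     # overall range: 20 to 100
    if total <= 47:
        level = "low"
    elif total <= 76:
        level = "moderate"
    else:
        level = "high"
    return intrinsic, extrinsic, total, level
```

For example, a nurse answering 3 on every item scores 60 overall, which falls in the moderate band.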
The BDI is a 21-item self-report scale addressing somatic and affective symptoms of depression. Each item consists of four alternative responses rated from 0 to 3 according to the severity of the symptom ( 0 = non-existent; 3 = severe ). Participants were asked to choose the response closest to their state during the past week. Participants' responses to the 21 items are summed to compose a depression score, with higher scores indicating higher levels of depression. Total scores range from 0 to 63. Scores 1–10 indicate no depression, 11–16 indicate mild mood disturbance, 17–20 indicate borderline clinical depression, 21–30 indicate moderate depression, 31–40 indicate severe depression, and scores over 40 indicate extreme depression [ 10 ].
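The same banding logic applies to the BDI. A minimal sketch of the classification, using the cut-offs quoted above (an illustration, not a clinical instrument):

```python
def bdi_severity(item_scores):
    """Sum 21 BDI item scores (each 0-3) and map the total to the
    severity bands reported in the text. Illustrative only."""
    if len(item_scores) != 21 or any(not 0 <= s <= 3 for s in item_scores):
        raise ValueError("expected 21 items scored from 0 to 3")
    total = sum(item_scores)  # total ranges from 0 to 63
    bands = [
        (10, "no depression"),
        (16, "mild mood disturbance"),
        (20, "borderline clinical depression"),
        (30, "moderate depression"),
        (40, "severe depression"),
    ]
    for upper, label in bands:
        if total <= upper:
            return total, label
    return total, "extreme depression"
```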
The SF-36 QoL comprises 36 questions covering eight aspects of health status: physical functioning, role-physical (role limitations due to physical health problems), bodily pain, general health, vitality, social functioning, role-emotional (role limitations due to emotional problems), and mental health. The scores of questions relating to each scale were summed and rescaled to a 100-point scale, where 100 is the best possible score and 0 the worst possible score [ 11 ].
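The 0-100 rescaling of each SF-36 sub-scale is a standard min-max transformation. A sketch follows; the sub-scale range passed in is an assumption for the example, since the raw range varies by sub-scale.

```python
def rescale_0_100(raw_sum, lowest_possible, highest_possible):
    """Rescale an SF-36 sub-scale raw sum to a 0-100 score, where 100 is
    the best possible score and 0 the worst possible score."""
    span = highest_possible - lowest_possible
    if span <= 0 or not lowest_possible <= raw_sum <= highest_possible:
        raise ValueError("raw sum must lie within a valid sub-scale range")
    return 100.0 * (raw_sum - lowest_possible) / span
```

For instance, a raw sum of 25 on a sub-scale whose possible range is 10 to 30 rescales to 75.0.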
A paper–pencil survey was distributed to the participants by three of the authors (D.N.T., A.P. and M.T.) at the National Urology Nurses Society's Annual Meeting in 2020 as well as the National Surgery and Perioperative Nurses Society's Annual Meeting in 2020.

Results
The mean age of the participants was 34.8 (23–51) years. 80.2% of the participants were female and 19.8% were male. Moreover, most participants (88.1%) had a bachelor’s degree. The majority of the participants (62.4%) had more than 10 years of work experience. There were no differences between perioperative nurses who had been in robotic-assisted laparoscopic and pure laparoscopic surgery regarding their demographic parameters. The demographic data of the participants are summarized in Table 1 .
We first examined the effects on our primary dependent variable, namely job satisfaction. The results indicated that 21.8% of nurses had low levels of job satisfaction, 65.3% had moderate levels of job satisfaction, and 12.9% had high levels of job satisfaction. We did not find significant differences between the groups in terms of their total MSQ score ( p : 0.066). In addition, intrinsic and extrinsic job satisfaction sub-scores of MSQ were not significantly different between the groups ( p intrinsic : 0.473, p extrinsic : 0.121).
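The between-group comparisons reported here were run in SPSS; the underlying computation can be sketched from scratch. The function below computes Welch's two-sample t statistic and its Welch–Satterthwaite degrees of freedom (the article does not state which t-test variant was used, and any data passed in here would be hypothetical).

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees
    of freedom for comparing two group means."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n - 1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

Identical samples give t = 0; in practice, the statistic is compared against the t distribution with df degrees of freedom to obtain the p value.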
Then, we examined the effects on our secondary dependent variables, namely BDI and quality of life. BDI scores indicated that 39.6% of nurses had no depressive symptoms, 31.7% had mild mood disturbance, 9.9% had borderline clinical depression, 13.9% had moderate depression, and 5% had severe or extreme depression. There were no significant differences between the groups in their BDI scores ( p : 0.329). For SF-36 QoL, mean physical functioning ratings were on the higher end of the scale (mean = 82.13, SD = 17.99). Participant ratings were generally at moderate levels on other aspects such as energy–vitality, mental health, and role limitations related to emotional problems. Finally, the two groups of nurses did not significantly differ in terms of their QoL ratings ( p : 0.136). The results are shown in Table 2 .

Discussion
Nurses who are overworked likely experience negative emotional states and adverse health effects due to burnout that in turn may reduce their performance and quality of care [ 1 , 12 , 13 ]. In addition, adoption of new technologies in the operating room could be motivating for perioperative nurses, yet may place additional job demands that could be difficult to manage. Perioperative nurses have varied responsibilities, including ensuring that they are correctly ‘scrubbed up’, preparing the instruments, trolleys and sterile supplies needed for the surgery, maintaining a sterile environment, preparing the patient, providing skilled assistance to the surgeon during the operation, and performing the swab/instrument count at the end of the procedure [ 14 ]. In addition to providing quality patient care, operating room nurses should be an effective team member who could work with multiple healthcare professionals [ 15 ].
Intense workload of operating room nurses likely increases stress, burnout, and anxiety, and decreases their job satisfaction. For example, a study by Boyle et al. investigated the job satisfaction of 55,516 registered nurses in 206 hospitals in the USA [ 16 ]. They found that job satisfaction varied by work unit and that perioperative nurses were least satisfied with their jobs due to the unique demands of their work environment. Given these considerations, it is essential to examine the well-being and job satisfaction of perioperative nurses and understand whether and how the adoption of new technologies influences not only their job demands, but also their psychological outcomes [ 13 ]. Ultimately, the quality of patient care rests on the well-being and job satisfaction of nurses in the operating room [ 12 ].
The current study revealed that perioperative nurses in general were moderately satisfied with their jobs. Between the robotic-assisted and pure laparoscopic surgery nurses, overall job satisfaction scores did not significantly differ. We also did not find significant differences in the intrinsic and extrinsic job satisfaction scores of two groups which requires further attention considering recent research findings. Intrinsic job satisfaction refers to the satisfaction gained from actual job tasks [ 17 , 18 ]. For example, finding meaning in one’s contributions or feeling a sense of achievement due to being a part of new initiatives are sources of intrinsic job satisfaction. A recent review of qualitative research studies on the experiences of robotic-assisted laparoscopic surgery nurses revealed that these nurses expressed a positive attitude toward incorporating the latest surgical innovations into their daily practices and that they were proud to be part of a team that employs this latest technology [ 19 ]. Accordingly, it could be expected that perioperative nurses who work in robotic-assisted laparoscopic surgery would experience higher intrinsic job satisfaction than those who work in pure laparoscopic surgery due to the novelty and usefulness of this new technology. However, we did not find such a difference in our data.
In addition, perioperative nurses working in robotic-assisted laparoscopic surgery also voiced concerns related to increasing importance of teamwork, shifting job demands, changes in workload, and intense training requirements [ 19 , 20 ]. These factors relate to extrinsic job satisfaction, which involves satisfaction related to external factors such as working conditions, relationships with co-workers and salary [ 17 , 18 , 21 ]. While nurses working in robotic-assisted laparoscopic surgery could be expected to experience lower extrinsic job satisfaction than those working in pure laparoscopic surgery due to the challenges related to working with new technology, we did not find a difference between the groups. We should note that two groups of nurses are not paid differently in our context, partially accounting for the lack of difference in their extrinsic satisfaction. We call for future research to further examine how new technologies affect perioperative nurses’ job satisfaction, especially when new job demands result in increased pay.
Nurses spend most of their working time interacting directly with patients and/or their relatives. In addition, nurses often witness tragic instances, including illness, trauma, and even death, which could be physically demanding and psychologically stressful. Negative psychosocial factors in the working environment can adversely affect the psychological and physical well-being of nurses [ 22 ]. Welsh found that 35% of surgical hospital nurses scored above the cutoff for mild to moderate depressive symptoms [ 23 ]. In our study, 18.9% of the nurses reported having moderate to extreme depressive symptoms. We did not find a statistically significant difference in terms of BDI scores between two groups of nurses. This finding indicates that the type of surgical environment does not relate to nurses’ mood states.
QoL is expressed in terms of an individual’s sense of satisfaction, which consists of factors such as work quality, satisfaction with personal life, and having financial independence. Welsh reported that work attributes including appropriate supervision, cooperation, and relationships with patients play a role in the QoL ratings of nurses [ 23 ]. A study by Orszulak et al. found the QoL level of nurses to be around the mid-point of the scale [ 24 ]. The nurses in their study reported the best QoL rating in the psychological domain and the worst in the physical domain. In our study, QoL ratings were also at moderate levels. However, QoL ratings in the physical domain were higher than those in other domains. We again did not find significant differences between two groups of nurses in their QoL ratings. This result suggests that the type of surgical environment does not relate to the QoL perceptions of perioperative nurses.
Perioperative nurses in our sample work in either public or private hospitals. We should note that our healthcare system is highly standardized, with minimal differences in working hours, work conditions and expectations for perioperative nurses working in public and private hospitals. Given these similarities, we do not expect the work setting to impact the variables of interest in this study (i.e., MSQ, BDI, and SF-36 QoL). In addition, Kaushik et al. revealed that the prevalence of depression, anxiety and ratings of work stressors were comparable for nurses working in public and private settings [ 25 ]. Based on these findings, we do not expect the work context to influence our results.
To our knowledge, this study is one of the first studies to compare job satisfaction, psychological well-being and QoL perceptions of nurses who work in robotic-assisted and pure laparoscopic surgery. We found that there were no differences between the groups in terms of these variables. Generally, these findings indicate that, regardless of the workload and work context, attention should be paid to enhancing the well-being of nurses to enhance effectiveness of patient care.
A few limitations of this study must be noted. First, this study includes a narrow group of nurses in our national healthcare system, including nurses from six hospitals (three public and three private) in different regions. While we do not have a specific reason to expect different results depending on the region and type of hospital, caution is needed in generalizing the results to other settings. Second, the cross-sectional nature of the study should be considered when interpreting the results.

Conclusion
Our results show that job satisfaction, psychological well-being and QoL ratings were similar between perioperative nurses who work in robotic-assisted and pure laparoscopic surgery. In our sample, 18.9% of the nurses reported having moderate to extreme depressive symptoms and most of them (87.1%) had low to moderate levels of job satisfaction. Finally, QoL ratings were generally at moderate levels. While the QoL and psychological well-being ratings could be impacted by factors outside of work, healthcare systems should focus on increasing nurse satisfaction to improve the quality of patient care.

Abstract

The rapid introduction of technological developments into healthcare systems adds another layer of complexity to the already demanding jobs of nurses, particularly for those working in perioperative care. In the present study, our primary outcome is job satisfaction, and the secondary outcomes are psychological well-being and quality of life (QoL) ratings of perioperative nurses who take part in robotic-assisted and pure laparoscopic surgery. A total of 101 perioperative nurses in six different centers were included in the study. Fifty-one of the nurses were working in robotic-assisted laparoscopic surgery and 50 of them were working in pure laparoscopic surgery. All participants responded to the Minnesota Satisfaction Questionnaire (MSQ), Beck Depression Inventory (BDI) and SF-36 QoL Measurement Survey. The two groups did not differ in their total MSQ, BDI and SF-36 QoL scores ( p MSQ : 0.066, p BDI : 0.329, p SF-36 QoL : 0.136). In addition, there were no differences between the two groups in their intrinsic and extrinsic job satisfaction sub-scores ( p intrinsic : 0.473, p extrinsic : 0.121). Overall, 18.9% of the nurses reported having moderate to extreme depressive symptoms and most of them (87.1%) had low to moderate levels of job satisfaction. Finally, QoL ratings were generally at moderate levels.
Perioperative nurses who work in robotic-assisted laparoscopic surgery do not differ from those working in pure laparoscopic surgery in terms of their job satisfaction, psychological well-being, and QoL ratings. In addition, across both groups, psychological well-being, job satisfaction, and QoL ratings were not particularly high, suggesting that more attention needs to be paid to improving the work conditions of perioperative nurses.
Keywords
Statistical analysis
Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS, Chicago, IL) version 28.0.1. Descriptive statistics for continuous variables were expressed as mean and standard deviation. The two groups were compared on their job satisfaction, psychological well-being, and quality of life scores using t tests. A p value of less than 0.05 was considered significant when testing the differences between the nurses in their ratings.

Author contributions
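The analyses above were run in SPSS. As an illustrative sketch only (not the authors' code), a two-sample comparison can be computed directly; the Welch variant below is an assumption, since the paper does not state whether equal variances were assumed, and the function name is hypothetical.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom.

    Illustrative helper only; the study's analyses were run in SPSS.
    """
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    # Unbiased sample variances
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    se2 = var_a / na + var_b / nb          # squared standard error of the difference
    t = (mean_a - mean_b) / math.sqrt(se2)
    # Welch-Satterthwaite degrees of freedom
    df = se2 ** 2 / ((var_a / na) ** 2 / (na - 1) +
                     (var_b / nb) ** 2 / (nb - 1))
    return t, df
```

The resulting t statistic and degrees of freedom would then be referred to the t distribution at the 0.05 significance level, matching the threshold stated above.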
All authors contributed to the study conception and design. DNT, ET, MB, AT: contributed to the conception and design of the study. DNT, AP, MT: collected data. MB, TK, OG, YA, AT: provided close supervision during the study. ET: worked on data analysis and interpretation. DNT, ET, MB, OG, TK, AT: involved in revising the paper critically to strengthen the content. DNT, ET, AP, MT, MB, TK, OG, YA, AT: agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All authors read and approved the final manuscript. DNT and ET contributed equally to this work.
Funding
Open access funding provided by the Scientific and Technological Research Council of Türkiye (TÜBİTAK). The authors declare that no funds, grants, or other support was received during the preparation of this manuscript.
Data availability
The data that support the findings of this study are available from the corresponding author, upon reasonable request.
Declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Ethical approval
This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Ethics Committee of Ankara City Hospital (Date: 27 February 2020/No: E1-20-356).
Consent to participate
Informed consent was obtained from all individual participants included in the study.
Consent to publish
All patients were given complete information on the risks and benefits of the procedure and gave their written consent.

Citation: J Robot Surg. 2024 Jan 13; 18(1):19 (open access, CC BY).
PMCID: PMC10787671; PMID: 37782382

Background
Colorectal cancer (CRC) is the third most common cancer globally, with incidence rates that correlate positively with Human Development Index (HDI) levels, a measure of societal development based on the average health, education, and income of a population [ 1 , 2 ]. Several CRC risk factors are associated with socioeconomic development, including smoking, physical inactivity, unhealthy diets, and excess body weight [ 3 ]. Incidence rates are declining among older adults in high-income countries due to screening and early removal of precursor lesions but increasing among young adults (age < 50 years) [ 4 ]. The reason for this increase is unknown, but lifestyle exposures in childhood and adolescence are considered drivers [ 5 ]. Economically transitioning countries are seeing a rapid increase in CRC incidence, albeit from low levels, reflecting a global shift toward more Westernized lifestyles [ 6 ].
Less is known about the impact of lifestyle factors on CRC prognosis. Smoking [ 7 , 8 ], physical inactivity [ 9 , 10 ], and unhealthy diets [ 11 , 12 ] have all been associated with increased mortality in CRC patients. The impact of overweight and obesity on CRC survival is, however, highly debated [ 13 ]. Some studies have reported improved survival among CRC patients with excess body weight compared to normal weight patients [ 14 ], the so-called obesity paradox [ 15 ]. Others report worse CRC-specific outcomes in the obese group [ 16 , 17 ], and yet other studies find no associations between body mass index (BMI) and survival [ 18 ]. Reverse causality due to illness-induced weight loss may account for these differences, highlighting the importance of timing when assessing body weight [ 19 ].
Few studies have examined the combined effects of lifestyle factors on CRC recurrence and CRC-specific survival [ 20 – 23 ]. Previous studies have reported conflicting results, which may reflect methodological differences, including the timing of exposure assessment (pre/post-diagnosis), the use of different lifestyle scores, and differences in study populations. This study aimed to investigate the associations of the combined impact of pre-diagnostic modifiable healthy lifestyle factors, including avoidance of smoking, moderate to high levels of physical activity, high adherence to a healthy diet, and BMI within the healthy range, with CRC recurrence and overall survival in a cohort of Swedish CRC stage I-III patients. | Methods
Study design
The Colorectal cancer low-risk study cohort consists of more than 3300 participants diagnosed with all-stage CRC in 14 hospitals in Middle Sweden from 2003 to 2009, as described elsewhere [ 24 ]. Participants were either consecutively recruited or identified using data provided by Regional Oncologic Centers; those identified through the registers received letters of invitation to participate in the study, and those interested were contacted over the telephone for informed consent and inclusion. A subset of participants included in 2004–2006 received a self-administered questionnaire on lifestyle habits ( n = 1767), with a response rate of 93% ( n = 1639).
We conducted a cohort study including participants from the Colorectal cancer low-risk cohort with stage I–III CRC who had completed the lifestyle questionnaire. A healthy lifestyle was the exposure of interest and recurrence-free survival (RFS) was the primary outcome of this study, using overall survival (OS) as a secondary outcome.
Participants
Participants with a radically resected adenocarcinoma of the colon or rectum who had surgery in 2003–2006 were eligible for inclusion. Stage IV CRC patients were excluded due to their dismal prognosis, as were participants with unavailable patient files or missing data on American Joint Committee on Cancer (AJCC) TNM stage. Patients were staged according to version 5 of the AJCC TNM classification [ 25 ].
Two investigators (S.B and P.R) collected treatment and follow-up data for the participants during the years 2017–2020, including date of surgery, American Society of Anesthesiologists (ASA) classification, oncological treatment (neoadjuvant radiotherapy/adjuvant chemotherapy), time to CRC recurrence, time to last recurrence-free follow-up visit, and time to all-cause death.
Exposure assessment
A semiquantitative questionnaire was used for the collection of information on smoking, physical activity, and anthropometric markers. Participants were asked to report their cigarette smoking status and history, including the number of cigarettes per day and duration of smoking for current and ever-smokers. Data on physical activity, including leisure time exercise, was collected using a validated set of questions with five pre-defined duration categories ranging from less than 1 h to more than 5 h/week [ 26 ]. Self-reported weight 5 years before diagnosis was used to calculate BMI by dividing the weight in kilograms by the square of the height in meters. Weight 5 years prior to diagnosis was chosen to minimize the risk of reverse causation, as CRC can induce weight loss.
The lifestyle questionnaire included a food frequency section designed to assess a typically Swedish diet. Participants were asked to report serving size and average intake frequency of 96 commonly eaten foods and beverages 5 years before diagnosis. A similar validated questionnaire, where participants report eating habits over the last year, has been used in previous studies [ 27 , 28 ].
Mediterranean diet score
The Mediterranean diet (MD) is one of the most scientifically evaluated dietary patterns in the field of nutritional epidemiology [ 29 , 30 ]. Several studies have reported inverse associations between MD adherence and CRC risk and mortality [ 31 – 35 ]. A diet adhering to the MD pattern was considered healthy in this study.
We used the modified Mediterranean diet scale (mMED) defined by Tektonidis et al. and developed further by Larsson et al. to compute a diet variable [ 36 , 37 ]. This is a modification of the Mediterranean diet scale originally constructed by Trichopoulou, adapted to better suit the intake habits of the Swedish population. The mMED score was created by categorizing the intakes of the following six food groups into quintiles: vegetables and fruits, legumes and nuts, whole grains, fish, dairy products, and red and processed meats. Participants received a score from 1 to 5 for being in the lowest to highest quintile of intake for each of the first five groups. The score was reversed for the last group, red and processed meats, assigning 5 points to the lowest quintile. The use of olive or rapeseed oil was assigned 5 points; conversely, 1 point was assigned for non-use. The mMED has previously included intake of alcoholic beverages; however, the health effects of moderate alcohol consumption are widely debated [ 38 ], prompting us to exclude alcohol consumption from the score. The total mMED score thus ranged from 7 (low adherence) to 35 (high adherence).
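As a minimal sketch of the scoring just described (the variable names are my own, not from the study), assuming each participant's quintile rank (1 = lowest, 5 = highest) has already been computed per food group:

```python
def mmed_score(quintile, uses_olive_or_rapeseed_oil):
    """Modified Mediterranean diet score, range 7 (low) to 35 (high adherence).

    `quintile` maps each food group to the participant's intake quintile,
    1 (lowest) to 5 (highest). Illustrative sketch only.
    """
    positively_scored = ("vegetables_fruits", "legumes_nuts",
                         "whole_grains", "fish", "dairy")
    # 1-5 points for the lowest to highest quintile of each healthy group
    score = sum(quintile[group] for group in positively_scored)
    # Reversed scoring for red and processed meats: lowest quintile -> 5 points
    score += 6 - quintile["red_processed_meat"]
    # Oil use: 5 points for olive/rapeseed oil, 1 point for non-use
    score += 5 if uses_olive_or_rapeseed_oil else 1
    return score
```

With the lowest-adherence profile (quintile 1 for the five healthy groups, quintile 5 for meat, no oil) this yields 7, and the mirror-image profile yields 35, matching the stated range.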
Healthy lifestyle and BMI score
A healthy lifestyle and BMI (HL) score was created by dichotomizing each of the four lifestyle variables into a pre-defined healthy and less healthy/unhealthy alternative [ 39 ]. Never smokers and former smokers with > 1 year of cessation time were considered non-smokers, as opposed to current smokers and former smokers with ≤ 1 year of cessation; current, but not former, smoking has been associated with poorer CRC-specific survival [ 7 ]. In accordance with the WHO recommendations for adults, participants with ≥ 150 min/week of leisure time exercise were considered physically active, versus < 150 min/week [ 40 ]. A low-risk diet was defined as an mMED score above the cohort median, versus an mMED score at or below the cohort median. Participants with a BMI of 18.5–24.9 kg/m² were considered to have a healthy body weight, as opposed to those with underweight (BMI < 18.5 kg/m²) or pre-obesity/obesity (BMI ≥ 25.0 kg/m²), according to the WHO classification [ 41 ]. One point was allocated for each healthy lifestyle factor, and 0 points for the less healthy or unhealthy alternative. The total score thus ranged from 4 (most adherent to a healthy lifestyle) to 0 (least adherent).
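A compact sketch of this dichotomization (parameter names are hypothetical; the cutoffs follow the text above, and `years_since_quitting` is None for never smokers):

```python
def hl_score(current_smoker, years_since_quitting, exercise_min_per_week,
             mmed, cohort_mmed_median, weight_kg, height_m):
    """Healthy lifestyle and BMI (HL) score: 0 (least) to 4 (most healthy).

    Illustrative sketch only; one point per pre-defined healthy alternative.
    """
    bmi = weight_kg / height_m ** 2  # BMI = weight (kg) / height (m) squared
    non_smoker = (not current_smoker and
                  (years_since_quitting is None or years_since_quitting > 1))
    healthy = [
        non_smoker,
        exercise_min_per_week >= 150,   # WHO physical activity recommendation
        mmed > cohort_mmed_median,      # low-risk (healthy) diet
        18.5 <= bmi <= 24.9,            # healthy body weight range
    ]
    return sum(healthy)
```

For example, a never smoker exercising 200 min/week with an mMED above the cohort median and a BMI of about 22.9 scores 4, while a recently quit smoker loses only the smoking point.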
Outcome assessment
CRC recurrence was defined as locoregional recurrence, distant metastasis, or the occurrence of a new colorectal tumor. Observation time started on the date of curative surgery and ended on recurrence or the date of the last follow-up visit to the surgical or oncological clinic. In the OS analysis, participants were observed from the date of curative surgery to the date of all-cause death or the last known date of contact.
Statistical methods
We categorized participants into four groups based on HL points. Those with an HL score of 0 or 1 were combined into one group due to the low number in the former category ( n = 20). Those missing data on smoking (0.8%) were coded as non-smokers. Participants who had left the entire diet section of the food frequency questionnaire (FFQ) blank were considered non-responders and excluded (0.9%). Median imputation was used to replace missing values for single food groups (6.5%), physical activity (2.8%), and BMI (2.6%).
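The median imputation step can be sketched as follows (a generic illustration, not the study's code; missing values are represented as None):

```python
def impute_median(values):
    """Replace missing entries (None) with the median of the observed values."""
    observed = sorted(v for v in values if v is not None)
    n = len(observed)
    # Median: middle value for odd n, mean of the two middle values for even n
    median = (observed[(n - 1) // 2] + observed[n // 2]) / 2
    return [median if v is None else v for v in values]
```

Applied per variable (e.g. BMI or a single food group), this fills each gap with the cohort's observed median while leaving recorded values untouched.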
The distribution of demographic variables across categories of exposure was tested using the chi-square test for categorical and the Kruskal–Wallis test for continuous variables. We used the Kaplan–Meier (K-M) method to assess median RFS and OS in each group and Cox proportional hazards model analysis to estimate univariate and multivariable-adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) for CRC recurrence and all-cause death. The survival analysis was right censored. The least healthy group served as the reference category.
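To make the K-M method concrete, a bare-bones right-censored Kaplan–Meier estimator might look like the sketch below (illustrative only; the study's analyses were run in SPSS):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve for right-censored data.

    `events[i]` is True if an event (recurrence or death) was observed at
    `times[i]`, and False if the participant was censored then.
    Returns (time, survival probability) pairs at each event time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        n_at_t = at_risk
        # Consume all observations tied at time t
        while i < len(data) and data[i][0] == t:
            deaths += int(data[i][1])
            at_risk -= 1
            i += 1
        if deaths:
            survival *= 1 - deaths / n_at_t  # step down at each event time
            curve.append((t, survival))
    return curve
```

Censored participants contribute to the risk set up to their last follow-up but do not step the curve down, which is how the "last recurrence-free follow-up visit" dates described above enter the analysis.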
The pre-defined confounders age, sex, and educational level were included in the multivariable model (Figure S1 ). Tumor stage, oncological treatment, and tumor site were considered potential mediators of the effect of a healthy lifestyle on RFS (Figure S1 ). Diabetes and cardiovascular disease (CVD) were considered potential mediators of the effect of a healthy lifestyle on OS (Figure S2 ).
Using the Wald test, we tested for interactions between the HL score and tumor site, oncological treatment, and tumor stage. Since most participants with rectal cancer had received neoadjuvant radiotherapy (RT) before surgery, that is before the start of observation time, we analyzed rectal cancer patients separately using an RT variable as a covariate in the multivariate regression model. A complete cases-only analysis was conducted excluding those missing data on any of the HL score variables.
All analyses were done using SPSS version 28. | Results
Participants missing an exact date of recurrence ( n = 29), with recurrences occurring within 6 months of diagnosis ( n = 18), or with a follow-up time of ≤ 6 months ( n = 11) were excluded from the RFS analysis ( n = 58). A total of 1040 participants were included in the RFS analysis and all 1098 participants were included in the OS analysis (Fig. 1 ).
Demographic characteristics of participants are shown for the total population and by HL score category in Table 1 .
The group with the healthiest lifestyle (HL 4) consisted of 157 participants (14%). These were more likely to be women, of higher age, with a higher educational level, and less often diabetics, as compared to the 233 participants (21%) in the least healthy group (HL 0–1) who were predominantly male and tended to be younger at CRC onset. There were no differences in cancer stage, tumor site, oncological treatment, or other clinical factors. The composition of the HL score is further outlined in Table 2 .
We observed 221 events of cancer recurrence among 1040 participants during a median follow-up time of 4.3 years (Fig. 2 ).
A healthy lifestyle was associated with improved RFS. The crude and adjusted HRs of recurrence and death were significantly lower for all higher score categories than for the reference. Compared to participants with an HL score of 0–1 (least healthy), the HL 4 (most healthy) category had a crude HR for recurrence of 0.51 (95% CI 0.32–0.81) and an adjusted HR for recurrence of 0.51 (95% CI 0.31–0.83), with sex, age, and educational level included in the multivariate model. The adjusted HRs for recurrence of participants with HL 2 and HL 3 were 0.57 (95% CI 0.40–0.81) and 0.66 (95% CI 0.47–0.92), respectively (Table 3 ). There were 542 deaths among 1098 participants during a median follow-up time of 6.3 years (Fig. 2 ). The crude HR for all-cause death for participants with HL 4 vs HL 0–1 was 0.65 (95% CI 0.48–0.87) and the adjusted HR for all-cause death was 0.52 (95% CI 0.38–0.70) (Table 3 ). The adjusted HRs for death for HL 2 and HL 3 vs HL 0–1 were 0.66 (95% CI 0.50–0.79) and 0.72 (95% CI 0.57–0.90). We found no significant interactions between the covariates included in the multivariate model when using the Wald test.
In the K-M curves for recurrence (Fig. 2 ), the curve of the least healthy group diverged from the others, with events occurring sooner in this group, indicating changes in HR over time. The proportional hazards assumption was not valid (log-rank test p = 0.001). The HRs for recurrence are thus to be interpreted as average estimates over the whole period of observation.
Sensitivity analysis
The violation of the proportional hazards assumption prompted us to conduct a Cox proportional hazards analysis with time-dependent covariates, in order to estimate the HRs for time periods < 24 months, ≥ 24 months – < 36 months, ≥ 36 months – < 48 months, and ≥ 48 months – < 60 months. The strongest effect of the HL score on RFS was seen in the interval of ≥ 24 months – < 36 months (Table S1 ).
We found no significant interaction effects between the HL score and cancer stage ( p = 0.39), tumor location ( p = 0.68), or oncological treatment ( p = 0.66) when using the Wald test in the RFS analysis. A complete cases-only analysis was performed, excluding all cases with missing values in score components; this only slightly affected the estimated HRs and 95% CIs (Table S2 ).
We analyzed the rectal cancer group separately, including a covariate for radiotherapy in the Cox model. Participants had received a total dose of either 25 Gy (80% of those treated with neoadjuvant RT) or 50.4 Gy. We coded a categorical variable with three levels (0 = no RT, 1 = 25 Gy, 2 = 50.4 Gy) and tested it in a Cox model for rectal cases only. The RT covariate was not statistically significant.
When including the individual score components as covariates in a multivariate Cox regression analysis, only smoking, and exercise were significantly associated with a reduced HR of CRC recurrence and death (Table S3 ). Sex, age, and level of education were all significantly associated with OS, but not RFS. | Discussion
In this cohort study, patients with stage I–III CRC and a healthy lifestyle (HL 4) had a 49% lower HR of cancer recurrence and a 48% lower HR of all-cause death compared with the least healthy group (HL 0–1). As we found the proportional hazards assumption to be violated in our survival analyses, indicating changes in hazard rate over time, we investigated the time-varying effects of the score on RFS. The results indicate a stronger effect in the interval of 24–36 months. However, only a small number of recurrences occurred after this period, and the results for survival > 36 months should thus be interpreted with caution.
This is one of the first studies to report a statistically significant decrease in the risk of CRC recurrence in patients adhering to a healthy lifestyle pre-diagnosis. A small number of previous studies have reported inverse associations between a healthy lifestyle and overall mortality, but results for RFS or CRC-specific mortality have often been weaker.
Among 5727 all-stage CRC cases, Pelser et al. reported a statistically significant reduction in the risk of all-cause death for those with a pre-diagnostic healthy lifestyle (including a BMI within the normal range, smoking avoidance, physical activity, a healthy diet, and a low intake of alcohol) [ 20 ]. Reduced risk of CRC-specific death was seen only among rectal cancer cases, whereas our results indicate a protective effect of a healthy lifestyle irrespective of the anatomical subsite.
In a cohort of CRC patients from the Nurses’ Health Study (NHS) and the Health Professionals Follow-up Study (HPFS), pre-and post-diagnostic adherence to the World Cancer Research Fund/American Institute for Cancer Research (WCRF/AICR) score was significantly associated with a lower risk of all-cause, but not CRC-specific, death [ 21 ]. This score includes physical activity, diet, and body weight, but not smoking. Non-smoking was our study’s strongest individual risk-reducing factor, which may account for the differences. Among 3292 cases of all-stage CRC within the European Prospective Investigation into Cancer and Nutrition (EPIC) study, pre-diagnostic concordance with the WCRF/AICR recommendations was however associated with reduced CRC-related and overall mortality [ 22 ].
Among 992 colon cancer stage III cases, finally, a healthy lifestyle post-diagnosis (including normal body weight maintenance, physical activity, and a healthy diet) according to the guidelines issued by the American Cancer Society (ACS) was associated with a significant improvement in OS and a significant trend toward improved RFS over a 7-year median follow-up time [ 23 ]. Stratifying for tumor stage did not indicate stage-specific associations between lifestyle and survival in our study.
A recently published large study on the associations between healthy lifestyles and cancer morbidity and mortality in diabetics, including 1904 participants with CRC, found a 45% lower risk of cancer mortality among those with the healthiest lifestyle, compared to the least healthy [ 42 ]. Lifestyle and dietary factors have also been included in recurrence and survival prediction models for colon cancer stage III, resulting in significantly improved predictions [ 43 ].
The suggested biological mechanisms conferring a protective effect of a healthy lifestyle on CRC risk include decreases in inflammation and oxidative stress, modulation of gut microbiota, decreased bowel transit time, and increases in insulin sensitivity [ 44 – 46 ]. The same mechanisms may be involved in reducing the risk of recurrence. Traditional models of tumorigenesis have considered systemic tumor spread to be a late event in the process of primary tumor progression. This is being challenged by studies showing that dissemination can occur also in the early stages of this process [ 47 , 48 ] even in preneoplastic lesions [ 49 ]. Environmental exposures during the process of tumorigenesis could thus influence the risk of dissemination.
We used a diet score modified to suit the intakes of a Swedish population, which may impair the generalizability of our results. However, the other lifestyle variables were assessed using internationally established criteria, and the results may thus apply to other high-income or even transitioning populations. Our results indicate that pre-diagnosis lifestyle has an impact not only on CRC risk but also on disease-specific survival, which underlines the importance of primary preventive measures. Further studies are warranted to confirm our results.
Strengths and limitations
Our study has several strengths including a long follow-up time with many observed events, detailed clinical data, and a design that may have reduced the risk of reverse causation in lifestyle assessment. The study is based on high-quality questionnaires, and the method has been evaluated previously. Further, the proportion of questionnaire responders was high (93%) decreasing the risk of selection bias and missing data was scarce, increasing the internal validity. There are also weaknesses to consider, including the risk of misclassification bias in self-reported data. Unmeasured lifestyle changes post-diagnosis may be reflected in our results. Studies on lifestyle changes in CRC survivors report conflicting results, with some finding shifts towards more healthy dietary habits [ 50 ], and smoking cessation [ 51 ], while others report little or no change in lifestyle [ 52 ].
Our exposure variable, the HL score, is based on four dichotomized lifestyle factors, each given equal weight within the score. Our results however indicate that non-smoking and physical activity have a stronger association with an improved recurrence-free and overall survival than a healthy diet and BMI within the normal range. Future studies could thus consider using a weighted score. We chose to dichotomize the BMI variable, placing the underweight participants in the same “unhealthy” category as the overweight and obese. The impact of underweight, overweight, and obesity on CRC recurrence and survival may however differ, which should be considered in future studies. Using additional anthropometric markers may further improve body weight assessment [ 6 ]. Confounding due to additional unmeasured factors cannot be ruled out. | Conclusions
Our study indicates that adherence to a healthy lifestyle may increase the RFS and OS of patients with stage I-III CRC. Avoidance of smoking and being physically active were independent risk-reducing factors for these outcomes. | Purpose
Colorectal cancer (CRC) risk is associated with modifiable lifestyle factors including smoking, physical inactivity, Western diet, and excess body weight. The impact of lifestyle factors on survival is less known. A cohort study was conducted to investigate the combined effects of a healthy lifestyle and body mass index on prognosis following CRC diagnosis.
Methods
Treatment and follow-up data were collected from the patient files of 1098 participants from the Colorectal cancer low-risk study cohort including stage I-III CRC patients. A healthy lifestyle and BMI (HL) score was computed using self-reported data on smoking status, physical activity, adherence to a Mediterranean diet pattern, and BMI, and divided into four categories ranging from least to most healthy. Survival analyses were performed to assess recurrence-free survival and overall survival across categories of exposure, using the Kaplan–Meier method and Cox proportional hazards models adjusted for age, sex, and educational level.
Results
Among 1098 participants with stage I-III CRC, 233 (21.2%) had an HL score of 0–1 (least healthy), 354 (32.2%) an HL score of 2, 357 (32.5%) an HL score of 3, and 154 (14.0%) an HL score of 4 (most healthy). Patients with the healthiest lifestyle (HL score 4) compared to the least healthy (HL score 0–1) had improved recurrence-free survival (HRadj 0.51, 95% CI 0.31–0.83) and overall survival (HRadj 0.52, 95% CI 0.38–0.70).
Conclusion
Adherence to a healthy lifestyle may increase the recurrence-free and overall survival of patients with stage I–III CRC.
Supplementary Information
The online version contains supplementary material available at 10.1007/s10552-023-01802-y.
Keywords
Open access funding provided by Karolinska Institute. | Supplementary Information
Below is the link to the electronic supplementary material.

Acknowledgements

The authors would like to thank all study participants, clinicians, and staff of the Swedish low-risk colorectal cancer study group. The late Berith Wejderot stands out for her diligent work with study inclusion and the handling of questionnaires. Professor Alicja Wolk at the Institute of Environmental Medicine KI contributed to the questionnaire design, and Jan-Erik Frödin, MD and Ph.D., at the Department of Oncology-Pathology KI contributed to the study design and helped plan the collection of data. We would also like to express our gratitude to biostatistician Mikael Andersson Franko at the Department of Clinical Science and Education at Karolinska Institutet for overseeing the data analysis and making valuable contributions to our discussions on the interpretation and generalizability of our results.
Author contributions
SB, PR, UL, SL, AL, and AL contributed to the study design and planning. Data collection was performed by SB and PR. Analyses were performed by SB. The manuscript was drafted by SB and PR and all authors reviewed and commented on previous versions of the manuscript. The final manuscript has been read and approved by all authors.
Funding
Open access funding provided by Karolinska Institute. This study was supported by Swedish Research Council, grant number 2019-01441, the Stockholm county council ALF project, grant number RS2020-0731, and the Swedish Cancer Society 211443Pj02H.
Data availability
The datasets analysed during the current study include sensitive and detailed data on the health and habits of study participants. They are not publicly available due to the risk of compromising their anonymity, but available from the corresponding author on reasonable request.
Declarations
Competing interests
The authors have no competing interests to disclose.
Ethics approval
This study was approved by the Regional ethical review board in Stockholm (Dnr 02-439, 04-4377, 2009/2155-32, 2013/928-32, 2014/1326-32, 2017/57-31/4).
Consent to participate
Informed consent was obtained from all study participants upon inclusion.

Citation: Cancer Causes Control. 2024 Oct 2; 35(2):367-376 (open access, CC BY).
PMCID: PMC10787673; PMID: 37812268

Introduction
Melittin, the major bioactive component (40–50%) [ 1 ] of honeybee ( Apis mellifera ) venom, is a hemolytic, small, linear peptide composed of 26 amino acid residues with the following sequence: GIGAVLKVLTTGLPALISWIKRKRQQ-NH2 [ 2 ]. It has a hydrophobic N-terminus and a hydrophilic C-terminus, forming channels in the plasma membrane [ 3 ]. Previous studies have demonstrated that melittin has antibacterial [ 2 ], anti-inflammatory [ 4 ], anti-arthritis [ 5 ], anti-tumor [ 6 ], and neuroprotective properties [ 7 – 9 ]. The neuroprotective effects against neurological disorders mainly include enhancing motor performance [ 10 ], protecting neurons, inhibiting oxidative stress and alleviating memory impairments [ 7 ], decreasing neuroinflammation [ 9 ], and anticonvulsant potential [ 11 ]. Few studies have examined the potential application and benefit of melittin for stroke.
Stroke remains the leading cause of death and the most common cause of permanent disability worldwide, with ischemic stroke accounting for approximately 85% of all cases [ 12 ]. A key target of current ischemic stroke studies is the inflammatory mechanisms that initiate within minutes after acute cerebral ischemia and persist for a long duration; intervention against inflammation may therefore be a prospective therapeutic target [ 13 ]. Neuroinflammation induced by neurocyte death after stroke triggers a cascade of events, including the overactivation of multiple cytokines and pathways [ 14 ], ultimately causing secondary injury and aggravating brain damage, and is integral to the pathophysiology of ischemic stroke [ 15 ]. In this process, nuclear factor-κB (NF-κB) and its pathways can be activated by oxidative stress [ 16 ], cerebral ischemia, and hypoxia, triggering a myriad of pro-inflammatory responses in microglia after brain ischemia, including upregulation of pro-inflammatory cytokines such as tumor necrosis factor-alpha (TNF-α), interleukin (IL)-6, and IL-1β, which can further increase inflammatory damage [ 17 ].
Meanwhile, molecules with anti-inflammatory functions, such as transforming growth factor beta (TGF-β), IL-10, and IL-4, can counteract the effects of the aforementioned pro-inflammatory cytokines [ 15 ]. Monocyte chemotactic protein-induced protein 1 (MCPIP1, also known as ZC3H12A) is a recently identified zinc-finger protein that is a negative regulator of the inflammatory response [ 18 ]. Specifically, MCPIP1 inhibits MCP-1, IL-1β, IL-6, and TNF-α production by inhibiting the c-Jun N-terminal kinase and NF-κB signaling pathways [ 19 ]. Recently, researchers have found that some agents can mediate neuroprotection during cerebral ischemia via MCPIP1 [ 20 , 21 ], such as tetramethylpyrazine [ 22 , 23 ], lipopolysaccharide (LPS) [ 24 ], and minocycline [ 25 ]. Additionally, studies have shown that MCPIP1 is involved in electroacupuncture pretreatment-induced delayed brain ischemia tolerance [ 26 ].
The present study has demonstrated that melittin can inhibit the expression of inflammation markers (IL-1β, IL-6, TNF-α, interferon-gamma, and MCP-1) in the heart induced by coxsackievirus B3 [ 27 ]. Additionally, melittin administration effectively corrected the heightened levels of TNF-α and IL-6 while simultaneously suppressing upstream signaling molecules such as Toll-like receptor 4 (TLR4), p38 mitogen-activated protein kinase, and NF-κB in an acetic acid-induced colitis model [ 28 ]. Based on these findings, we hypothesize that melittin may also exert neuroprotective effects against cerebral ischemic conditions.
Recent in vivo and in vitro investigations have revealed that melittin conveys neuroprotective and organ-protective influences in neurodegenerative diseases, acting through anti-apoptotic and anti-inflammatory pathways by hindering the NF-κB signaling pathway [ 11 , 28 ]. Furthermore, melittin has been observed to manifest anti-inflammatory properties in BV-2 microglia by diminishing levels of nitric oxide and inducible nitric oxide synthase, thus obstructing LPS-induced NF-κB activation [ 29 ]. Consequently, in this study, we evaluated whether melittin administration could offer protection against focal cerebral ischemia in an animal model employing distal middle cerebral artery occlusion (dMCAO). We also scrutinized the repression of microglial inflammasome activation by melittin within LPS-stimulated BV-2 cells. Mechanistically, we investigated the impact of melittin on NF-κB pathway inhibition and MCPIP1 upregulation. This novel insight into MCPIP1-induced anti-inflammatory activity implies that melittin might be a pioneering therapeutic agent for ischemic stroke and potentially for other neuroinflammatory diseases.
Animals and Melittin Administration
Male, specific pathogen-free, C57/BL6 mice (20–25 g) were supplied by Vital River (Beijing Vital River Laboratory Animal Technology, Beijing, China). In all experiments, 8- to 12-week-old mice were used, housed in a controlled animal facility in Hebei Key Laboratory of Vascular Homeostasis, Second Hospital of Hebei Medical University. All mice were supplied with water and balanced nutritional rodent chow and housed in controlled conditions with a 12-h light/dark cycle and humidity of 60 ± 5% at 22 ± 3 °C. The experimental procedures were approved by the Experimental Animal Ethics Committee of Hebei Medical University (Shijiazhuang, China, Permit No. HMUSHC-130318). All studies were performed per the Guide for the Care and Use of Laboratory Animals (8th Edition) and the ARRIVE guidelines.
Melittin (purity 98%; measured using high-performance liquid chromatography; Xian Lintai Bioscience & Technology Co. Ltd., China) was diluted with 0.9% saline and administered intraperitoneally. In preliminary experiments, the median lethal dose (LD50) of melittin was less than 100 μg/g. Mice were randomly divided into the following groups: the MEL groups, in which mice were treated with 0.1 μg/g (MEL-L), 0.2 μg/g (MEL-M), or 0.4 μg/g (MEL-H) melittin 24 h before ischemia and once a day after surgery until sacrifice; and the sham and Vehicle groups, in which mice were intraperitoneally injected with an equal volume of 0.9% saline at the corresponding time points.
Mouse Focal Brain Ischemia Model
Focal cerebral cortical ischemia was established by permanent occlusion of the unilateral middle cerebral artery (MCA) and common carotid artery (CCA), as described previously [ 30 ]. Weighed animals were anesthetized with an intraperitoneal injection of avertin (400 mg/kg, Cat# T48402-25G, Sigma-Aldrich, USA). The body temperature was monitored and maintained at 37.5 ± 0.5 °C. For dMCAO (Vehicle group), a median neck incision (approximately 1 cm) was performed, and the right CCA was isolated, exposed, and permanently ligated with a surgical suture. A skin incision was made between the right eye and the external auditory canal. Then the cortical branch of the right MCA was exposed by drilling a small hole, approximately 2 mm in diameter, through the skull. The MCA was then coagulated with a cauterizer (Bovie, USA) under a microscope to avoid damaging the brain surface. Sham-operated control mice underwent the same procedure except for CCA occlusion and distal MCA coagulation.
Neurological Function Assessment
The rotarod test was performed to evaluate the motor coordination and learning function of ischemic mice per the procedures described by Hayashi-Takagi et al. [ 31 ]. The modified neurological severity score (mNSS) was determined to assess neurological function, including motor, sensory, reflex, and balance abilities, learning, and limb coordination skills, following the criteria reported by Gao et al. [ 32 ]. The mNSS scores were recorded at 24 h, 48 h, and 72 h after dMCAO, and grading was performed using a modified scale ranging from 0 to 18 (normal score, 0; maximal deficit score, 18). Higher scores suggested more severe neurological impairment. Mice that could remain on a fixed (4 rpm) rotating rod for at least 60 s were selected for the rotarod test and divided into four groups: Vehicle (dMCAO), MEL-L, MEL-M, and MEL-H (exact dosages as described above). After training for five days, the animals were placed on the rod with an accelerating speed from 4 to 40 rpm in 4 min for three trials at each time point, and then the results were averaged. Twelve male mice were used in each group.
Brain Infarction and Water Content Measurement
The brains were stained with 2,3,5-triphenyltetrazolium chloride (TTC) to evaluate infarct volume at 24 h after dMCAO, as described previously. Mice were euthanized, and the brains were removed and frozen for 20 min. The frozen brain tissue was sectioned coronally at a thickness of 1 mm and incubated in 2% TTC at 37 °C for 15 min, followed by fixation in 4% paraformaldehyde for 24 h. TTC reacted with dehydrogenase in normal tissue and stained red, and ischemic tissue appeared pale because of its low dehydrogenase activity. The infarct volume was quantified using image analysis software (Image-Pro Plus 5.1; Media Cybernetics, Inc., Bethesda, MD, USA) and expressed as a percentage of the contralateral hemisphere. To evaluate brain edema after cerebral ischemia, all animals were anesthetized with 4% isoflurane, and brains were removed at 24 h, 48 h, and 72 h after dMCAO. Brain tissues were weighed before (wet weight) and after (dry weight) drying at 95 °C for 24 h. We used the following formula to calculate the brain water content (%): (wet weight − dry weight)/wet weight × 100. Six mice were used in each group.
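As an illustration only (the function and variable names below are ours, not from the study), the two percentage quantifications described above reduce to simple ratios:

```python
def brain_water_content(wet_weight: float, dry_weight: float) -> float:
    """Brain water content (%): (wet weight - dry weight) / wet weight * 100."""
    return (wet_weight - dry_weight) / wet_weight * 100.0


def infarct_percent(infarct_volume: float, contralateral_volume: float) -> float:
    """Infarct volume expressed as a percentage of the contralateral hemisphere."""
    return infarct_volume / contralateral_volume * 100.0


# Illustrative values: a hemisphere weighing 450 mg wet and 351 mg dry
# gives a brain water content of 22%.
```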
Cerebral Blood Flow (CBF) Assessment
CBF was monitored in real-time using a laser speckle contrast imager (PeriCam PSI System, Perimed, Sweden). Anesthetized mice with the skull exposed were fixed on a stereotactic apparatus while undergoing scans. The images were used to calculate the average perfusion level in infarcted areas and assess CBF fluctuations in both hemispheres. The body temperature of mice was kept at 37 ± 0.2 °C during the operation.
Cell Culture and Transfections
The murine BV-2 microglia cell line obtained from the National Collection of Authenticated Cell Cultures (Shanghai, China) was cultured in DMEM (Gibco, USA) supplemented with 10% (v/v) fetal bovine serum (Gibco) and 1% penicillin/streptomycin at 37 °C in a humidified atmosphere containing 5% CO 2 . Cells were passaged when they were approximately 80% confluent. According to the experimental conditions, cells were seeded in culture dishes; after the cells adhered, subsequent experiments were performed.
For the cellular inflammation model, BV-2 cells were stimulated with 1 μg/ml LPS for 24 h. Because the safe concentration of melittin in BV-2 cells was less than 4 μg/ml, cells were pretreated for 1 h with one of three doses of melittin (diluted with culture medium): 0.5 μg/ml (MEL-L), 1 μg/ml (MEL-M), or 2 μg/ml (MEL-H), before treatment with LPS. BV-2 cells in the control group were not pretreated with melittin.
Transfections with fluorescent MCPIP1 siRNA (si-ZC3H12A, GenePharma, Shanghai, China) and control siRNA (si-Control, GenePharma) were performed using Lipofectamine RNAiMAX strictly according to the manufacturer’s instructions. The siRNA sequences used in the experiment were as follows: si-ZC3H12A: 5′-CCUGGACAACUUCCUUCGUAAGAAA-3′; si-Control: 5′-UUUCUUACGAAGGAAGUUGUCCAGG-3′. Melittin (2 μg/ml) was added to the culture medium 6 h after siRNA transfection. After 1 h of melittin pretreatment, the cell culture medium was replaced with a medium containing both LPS and melittin, and the cells were incubated for 24 h in a 5% CO 2 incubator at 37 °C for analysis. Cells transfected with si-Control were used as controls.
Cell Viability Assay
The cell viability rate was determined using a Cell Counting Kit-8 (CCK-8, Dojindo, Japan). The cells were seeded in 96-well plates at a density of 2 × 10 5 /ml and incubated for 24 h before experimental treatments. At each time point, processing was ended, and 10 μl of CCK-8 was added to each well. After incubation at 37 °C for 2 h, the absorbance at 450 nm was measured using a microplate reader (Tecan, Switzerland). The cell viability of the test groups was expressed as the percentage of viable cells normalized to that of the control group. Six replicate wells were set up in each group. The results represent three independent experiments.
Real-Time Polymerase Chain Reaction (RT-PCR)
Briefly, total RNA was extracted from ischemic brain tissues and BV-2 cells using TRIzol (Invitrogen, USA), isolated using a Total RNA Purification kit (Nanohelix, Daejeon, Korea) following the manufacturer’s instructions, and then reverse-transcribed to cDNA using a first-strand cDNA synthesis kit (Fermentas International Inc.). Quantitative RT-PCR (qRT-PCR) was performed using a fluorescent dye with a LightCycler480 PCR instrument (SYBR Green I; Cwbio). Forty cycles were conducted as follows: 95 °C for 10 s, 60 °C for 20 s, and 72 °C for 20 s. The mRNA level was normalized to that of mouse GAPDH and expressed as the fold change. The mouse-specific primers (Sango Biotech, Shanghai, China) were as follows: IL-1β: F: ACTGTTTCTAATGCCTTCCC; R: TGGTTTCTTGTGACCCTGA, IL-6: F: TCCAGTTGCCTTCTTGGGAC; R: GTGTAATTGCCTCCGACTTG, TNF-α: F: CCAGTGTGGGAAGCTGTCTT; R: AAGCAAAAGAGGAGGCAACA, NF-κB: F: GGCTGTATTCCCCTCCATCG; R: CCAGTTGGTAACAATGCCATGT, MCPIP-1: F: CAATGTGGCCATGAGCCAT; R: AGTTCCCGAAGGATGTGCTG, MM-GAPDH: F: GGTTGTCTCCTGCGACTTCA; R: TGGTCCAGGGTTTCTTACTCC. In vivo, we obtained tissue samples from the brain at 24 h, 48 h, and 72 h after dMCAO. Six mice were used in each group at each time point. In vitro, six replicate wells were set up in each group.
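The study states that mRNA levels were normalized to GAPDH and expressed as fold change but does not name the exact formula; a common approach for such qRT-PCR data is the 2^(−ΔΔCt) method, sketched here under that assumption (all names are illustrative, not from the paper):

```python
def fold_change_ddct(ct_gene_treated: float, ct_gapdh_treated: float,
                     ct_gene_control: float, ct_gapdh_control: float) -> float:
    """Relative expression by the 2^(-ΔΔCt) method.

    ΔCt = Ct(gene) - Ct(GAPDH) within each sample;
    ΔΔCt = ΔCt(treated) - ΔCt(control);
    fold change = 2^(-ΔΔCt).
    """
    delta_ct_treated = ct_gene_treated - ct_gapdh_treated
    delta_ct_control = ct_gene_control - ct_gapdh_control
    return 2.0 ** -(delta_ct_treated - delta_ct_control)


# Example: gene Ct 24 vs GAPDH Ct 20 in the treated sample, gene Ct 26 vs
# GAPDH Ct 20 in the control sample -> ΔΔCt = (24-20) - (26-20) = -2,
# so the treated sample shows a 4-fold increase.
```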
Enzyme-Linked Immunosorbent Assay (ELISA)
Brain homogenates were prepared using a tissue homogenizer, and the supernatant was collected for detection after centrifugation. For in vitro experiments, the cell culture supernatant after centrifugation was used for further studies. The Mouse IL-6 ELISA Kit (JEM-05, Anhui Joyee Biotechnics, China), Mouse IL-1β ELISA Kit (JEM-01, Anhui Joyee Biotechnics), and Mouse TNF-α ELISA Kit (JEM-12, Anhui Joyee Biotechnics) were used to measure the tissue and cell protein concentrations of IL-6, IL-1β, and TNF-α according to the manufacturer’s instructions. A microplate reader (Infinite M200 PRO, Tecan, Switzerland) was used for the analysis. Observation time points, grouping, and sample sizes were the same as above.
Western Blotting
Total protein was extracted from brain tissues using a Total Protein Extraction Kit (Applygen Technologies Inc. Beijing, China) and a Nuclear Protein Extraction Kit (CWBIO, Beijing, China). Lysis buffer was prepared containing phenylmethylsulfonyl fluoride (Applygen Technologies Inc. Beijing, China), a protease inhibitor (Sigma, USA), and cell RIPA buffer (Solarbio, Beijing, China) with a ratio of 1:1:100. Adherent and centrifuged cells were separated, mixed with prepared lysis buffer, and lysed on ice for 30 min. After centrifugation at 4 °C and 12,000 × g for 20 min, the supernatant was assessed using a protein concentration assay. Nucleoprotein from brain tissues and cells was extracted using a Nuclear Protein Extraction Kit (CWBIO, Beijing, China) in strict accordance with the instructions. Protein concentrations were determined using a bicinchoninic acid protein assay reagent kit (Thermo Scientific, USA). Proteins (50 μg) were separated on a 10% sodium dodecyl sulfate–polyacrylamide gel and transferred onto polyvinylidene difluoride membranes (Roche, USA) in a transfer buffer containing 0.1% sodium dodecyl sulfate. The membranes were blocked with 5% skimmed milk for 1 h and incubated with primary antibodies consisting of mouse anti-β-actin (1:15,000, GeneTex, USA), mouse anti-MCPIP1 (1:1000, Abcam, US), rabbit anti-TLR4 (1:500, SAB), mouse anti-P84 (1:1000, GeneTex), and rabbit anti-NF-κB (1:500, Cell Signaling, USA) in blocking buffer overnight at 4 °C with gentle shaking. After three rinses (10 min each) with TPBS (phosphate-buffered saline (PBS) and 0.1% Tween-20), the membranes were incubated with secondary fluorescent antibodies (goat anti-rabbit or goat anti-mouse, 1:10,000; Rockland) at 37 °C for 1 h and then washed three times with TPBS (10 min each). An Odyssey infrared imaging system (LI-COR Bioscience) was used to scan and measure the relative density of target bands. 
The ratios of the protein bands of interest and the loading control (β-actin for total and cytoplasmic protein, P84 for nuclear protein) were calculated using Image-Pro Plus 5.1, and the data were normalized to those of the sham condition.
Cellular Immunofluorescence Staining
The immunofluorescence technique was performed as described previously [ 24 ]. Cells were washed with PBS and fixed in fresh 4% paraformaldehyde solution for 30 min at room temperature. The cells were then washed three times with PBS, incubated for 30 min in a blocking solution with 10% donkey serum at room temperature, and then incubated with a specific primary monoclonal rabbit anti-NF-κB antibody (1:500, Cell Signaling, USA) diluted in blocking buffer (1:400) overnight at 4 °C in a humidified chamber. On the second day, chamber slides were washed three times with PBS and incubated for 1 h with the appropriate corresponding secondary antibody (Alexa Fluor 488 or 594, 1:800, Jackson Immuno Research) diluted in blocking buffer (1:500) at 20–37 °C. The cells were washed three times with PBS, incubated for 5 min with Hoechst (1:100) for nuclei staining at room temperature while protected from light, and mounted with Vectashield medium. Color images were acquired with a laser scanning confocal microscope (Zeiss LSM880, Germany), and 200 cells from each experiment were counted using ImageJ software.
Statistical Analysis
All data are presented as the mean ± standard error of the mean. Pairwise comparisons between groups were analyzed using Student's t-test, and multiple comparisons were evaluated by one-way ANOVA followed by the least significant difference (LSD) test. For all analyses, P < 0.05 was considered statistically significant. | Results
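As a minimal, dependency-free sketch of the group comparison scheme described in the Statistical Analysis section (one-way ANOVA followed by Fisher's least significant difference test; the helper names are ours, not the authors'), the test statistics can be computed as:

```python
import math


def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample lists."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)


def lsd_t(groups, i, j):
    """Fisher's LSD t statistic for groups i and j, using the pooled MSE.

    The statistic is compared against the t distribution with N - k degrees
    of freedom to decide significance (P < 0.05 in the study).
    """
    means = [sum(g) / len(g) for g in groups]
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    mse = ss_within / (sum(len(g) for g in groups) - len(groups))
    se = math.sqrt(mse * (1 / len(groups[i]) + 1 / len(groups[j])))
    return (means[i] - means[j]) / se
```

Such an analysis would typically be run once per outcome measure (e.g., mNSS score at a given time point) across the Vehicle, MEL-L, MEL-M, and MEL-H groups.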
Melittin Ameliorates Neurological Deficits and Decreased CBF in dMCAO Mice
To assess the effects of melittin on neurological deficits following cerebral ischemia, dMCAO mice were pretreated with varying concentrations of melittin (0.1, 0.2, and 0.4 μg/g for the MEL-L, MEL-M, and MEL-H groups, respectively). The mNSS and rotarod test results were recorded at 24 h, 48 h, and 72 h post-operation. The neurological scores in the MEL-M and MEL-H groups were notably lower than those in the vehicle group at both 48 and 72 h after dMCAO ( P < 0.05) (Fig. 1 a). Furthermore, the motor function test revealed significant recovery of neurologic impairment in the MEL-H group across all observation points, whereas no discernible difference in neurological deficits was found between the MEL-L group and the vehicle group (Fig. 1 b). From these observations, we identified the effective therapeutic concentrations of melittin to be 0.2 μg/g and 0.4 μg/g, and we consequently chose the 0.4 μg/g dose for melittin administration in the subsequent experimental phase.
Bilateral CBF was carefully monitored using a laser speckle apparatus at specific time intervals: before the stroke and immediately, 6 h, 12 h, 24 h, 48 h, and 72 h afterward. The results indicated that CBF on the lesioned side sharply decreased after dMCAO. However, perfusion in the ischemic cortex of melittin-pretreated mice gradually increased from 12 to 72 h after stroke, showing a marked improvement compared to the vehicle group (Fig. 1 c, d, f). CBF on the contralateral side in both the vehicle and MEL groups began to recover slowly at 6 h post-stroke, with no significant differences observed between the two groups (Fig. 1 e). These findings affirm that melittin ameliorates motor deficits and enhances CBF within the ischemic cortex in the dMCAO mouse model.
Melittin Reduces Infarct Volume and Brain Edema in Ischemic Brain Injury
After dMCAO, the brain infarct size was assessed using TTC staining, which showed that the infarct size in melittin-pretreated brain tissue was reduced compared with that in the vehicle group at 24 h ( P < 0.01) in a dose-dependent manner (Fig. 1 g). Brain edema is one of the earliest pathological processes after ischemic neuronal damage; it significantly increases as early as 20 to 45 min after dMCAO and further increases over 72 h. Our results showed that melittin decreased the percentage of brain water content in the ipsilateral hemisphere after stroke at 24 h, 48 h, and 72 h compared with that in the vehicle group ( P < 0.05) (Fig. 1 h). These results indicated a potential effect of melittin in alleviating infarct volume and brain edema after stroke in vivo.
Melittin Inhibits Pro-inflammatory Factors and Induces MCPIP1 Expression in the Ischemic Brain
We examined the expression of pro-inflammatory cytokine transcripts in ischemic mouse brains after dMCAO. The mRNA and protein levels of IL-1β, IL-6, and TNF-α in the ischemic brain were assessed by qRT-PCR and ELISA, respectively. Cerebral ischemia resulted in significantly increased levels of IL-1β, IL-6, and TNF-α compared with those after sham treatment. These increases were inhibited by melittin treatment in a dose- and time-dependent manner (Fig. 2 a, b). Because activation of the NF-κB signaling pathway is involved in producing inflammatory factors, we determined whether melittin pretreatment can affect this process. Compared with the vehicle group, NF-κB mRNA was significantly decreased 72 h after stroke in the MEL-M and MEL-H groups (Fig. 3 a). The western blot results were consistent with the PCR results (Fig. 3 b).
According to previous studies, MCPIP1 plays a significant anti-inflammatory role by inhibiting the generation of major pro-inflammatory cytokines [ 27 ]. Consistent with these previous findings, we found that the MCPIP1 mRNA level in ischemic brain tissue was slightly increased after dMCAO compared with that on the contralateral side; the level peaked at 48 h and then declined by 72 h ( P < 0.01) (Fig. 3 c). Nevertheless, the qRT-PCR results indicated that MCPIP1 mRNA expression in the MEL-H group was steadily upregulated over 72 h (Fig. 3 c). Additionally, the western blot results consistently showed that the protein level of MCPIP1 in ischemic brain tissue was significantly elevated by melittin treatment and was maintained at a high level 24–72 h after dMCAO (Fig. 3 d). These findings indicate the potential of melittin to alleviate the neuroinflammatory injury induced by cerebral ischemia via inhibiting NF-κB and upregulating MCPIP1.
Melittin Reduces LPS-Induced Cell Death and Inflammatory Cytokine Activation in BV-2 Cells
We examined the protective effects of melittin against LPS-induced cell death in BV-2 cells. BV-2 cells were pretreated with three doses of melittin for 1 h, then stimulated with LPS (1 μg/ml) and incubated for 24 h. Cell viability assays revealed that LPS stimulation of BV-2 cells resulted in decreased cell viability, and melittin pretreatment (1 μg/ml, 2 μg/ml) increased survival after LPS-induced inflammatory injury (Fig. 4 a). To further clarify the mechanism of action, qRT-PCR and ELISA were performed to detect cytokine levels in LPS-induced BV-2 cells with or without melittin treatment. The results showed that melittin markedly reduced IL-1β, IL-6, and TNF-α expression at the gene and protein levels in a dose-dependent manner in LPS-induced BV-2 cells (Fig. 4 b, c). Together with the in vivo findings, these in vitro results indicated that the anti-inflammatory bioactivity of melittin contributes to its neuroprotective effect against cerebral ischemia and neuroinflammatory injury.
Melittin Suppresses Activation of NF-κB and Upregulates MCPIP1 Expression In Vitro
NF-κB plays a pivotal role in innate immune responses. Therefore, we investigated the effect of melittin on signaling molecules in LPS-induced BV-2 cells. The qRT-PCR and western blot results showed that melittin administration significantly reduced NF-κB expression at the mRNA and protein levels in BV-2 cells compared with that in the LPS group (Fig. 5 a).
Moreover, the cellular immunofluorescence results showed that BV-2 cells were double-stained with anti-NF-κB (red) antibodies and Hoechst (blue), and the merged image indicated nuclear translocation. In the control group, most NF-κB was located in the cytoplasm. After LPS stimulation, NF-κB started to translocate into the nucleus. However, melittin treatment significantly inhibited LPS-induced nuclear translocation of NF-κB compared with the LPS group (Fig. 5 b).
An investigation of anti-inflammatory factors showed that MCPIP1 mRNA increased at 3 h, peaked at 6 h, and decreased at 12 h due to LPS treatment compared with the control group (Fig. 5 c). In LPS-stimulated BV-2 cells, the mRNA and protein levels of MCPIP1 were elevated by melittin pretreatment in a dose-dependent manner (Fig. 5 c).
MCPIP1 Deficiency Attenuates Melittin Treatment-Induced Tolerance to Inflammatory Injury in LPS-Induced BV-2 Cells
Previous studies have shown that MCPIP1 may be a modulator that critically controls inflammation and immunity and alleviates inflammation by selectively suppressing the NF-κB pro-inflammatory signaling pathway [ 28 ]. Our results above indicated that melittin treatment could increase MCPIP1 expression and reduce the NF-κB level. Consequently, a further study was conducted to examine whether MCPIP1 is involved in melittin treatment-induced neuroprotection against inflammatory injury induced by LPS in BV-2 cells. After successfully knocking down MCPIP1 expression with MCPIP1-specific siRNA (Fig. 6 a), BV-2 cells were treated with melittin and LPS, and the mRNA and protein expression levels of IL-1β, IL-6, and TNF-α were detected by qRT-PCR and ELISA. The results showed that MCPIP1 depletion significantly increased the expression of IL-1β, IL-6, and TNF-α induced by LPS in BV-2 cells pretreated with melittin compared with that of the si-Control group (Fig. 6 b). Furthermore, the absence of MCPIP1 caused a marked elevation in the mRNA and protein levels of NF-κB according to the PCR and western blot results (Fig. 6 c). Consistently, the laser confocal immunofluorescence microscopy results indicated that nuclear translocation of NF-κB caused by LPS-induced inflammatory injury was increased in the si-ZC3H12A group, even when both groups were pretreated with melittin (Fig. 6 d). These results revealed that the anti-inflammatory effect of melittin was weakened by MCPIP1 knockdown, providing direct evidence that the neuroprotective effects of melittin against LPS-induced inflammatory injury are mediated, at least in part, by MCPIP1. | Discussion
In traditional Chinese medicine, bee venom has long been used against chronic pain, skin diseases, arthritis, inflammation, and cancer [ 2 ]. As a bee venom peptide, melittin is hydrophobic and amphipathic, showing archetypal membrane activity [ 33 ]. It is toxic to both cells and tissues at sufficiently high concentrations. Nevertheless, various exciting and potentially useful biological activities have been reported for melittin at low concentrations [ 3 , 5 ], conjugated to proteins or other molecules, or formulated in nanoparticles and liposomes [ 2 ]. Recent experimental studies have shown that melittin can reduce excessive immune responses and provide a new alternative for controlling inflammatory diseases, including skin inflammation, neuroinflammation [ 7 , 34 ], atherosclerosis, arthritis, and liver inflammation [ 35 ].
Melittin possesses neurophilic properties, with the toxic effect initially causing subcortical excitation and later inducing extensive inhibition in the cortex and subcortical structures [ 36 , 37 ]. Moreover, melittin has demonstrated a remarkable analgesic effect, traversing the blood–brain barrier (BBB) and affecting the central nervous system, thereby raising the pain threshold and reducing pain sensitivity [ 2 , 38 ]. A recent study has shown that subtoxic concentrations of melittin can temporarily open the paracellular tight junctions of the BBB [ 2 ]. Another study has demonstrated that a 150-μL dose containing 3 μM of melittin significantly increases BBB permeability without causing significant toxicity or neurologic effects [ 39 ]. Additionally, the neuroprotective effect of melittin has been observed in Parkinson’s and Alzheimer’s disease after intraperitoneal injection [ 7 ]. As the pathophysiology of ischemic stroke encompasses inflammatory responses, oxidative stress, and cell death within the ischemic focal area [ 33 , 34 ], we speculated that melittin could alleviate cerebral ischemic injury by inhibiting the inflammatory response within ischemic brain tissue and cells. By electrocoagulation, we established an experimental model of cerebral infarction (dMCAO) and carried out a series of ethological, morphological, and molecular biology experiments. The in vivo results indicated that medium and high doses of melittin could significantly improve the motor function of mice with focal cerebral ischemia, reduce edema in brain tissue, decrease cerebral infarction volume, and promote blood flow recovery in ischemic brain tissue. These findings suggest that melittin has neuroprotective potential for preventing brain tissue injury caused by ischemia.
Neuroinflammation activated within hours after brain ischemia is a prime target for developing new stroke therapies [ 15 , 40 , 41 ]. During cerebral ischemia, the expression of both pro-inflammatory cytokines, such as TNF-α, IL-1β, and IL-6, and anti-inflammatory factors, such as MCPIP1, rapidly increases throughout the brain tissue [ 42 , 43 ]. In this process, NF-κB, an essential transcription factor, is activated by these cytokines (IL-1, IL-6, and TNF-α) and in turn regulates numerous genes, including TNF-α, IL-6, IL-1β, and matrix metallopeptidase 9 [ 12 , 44 ]. This vicious cycle expands the initial inflammatory response [ 45 ] and increases the detrimental effects of cerebral ischemia [ 46 ]. Our qRT-PCR, ELISA, and western blot results showed that melittin reduced the ischemia-induced increases in IL-1β, IL-6, and TNF-α and inhibited NF-κB expression in the nucleus in vivo. The findings revealed that melittin exhibits anti-inflammatory effects against cerebral ischemia, thereby exerting neuroprotective effects.
To determine the pharmaceutical characterization and possible mechanism of action, we constructed a neuroinflammation model in BV-2 microglial cells stimulated by LPS [ 47 ]. A recent study by Ran et al. [ 48 ] has shown that in MCAO mice, microglia displayed enhanced nuclear translocation of NF-κB p65 after surgery, accompanied by elevated levels of TLR4 protein ( P < 0.001) and increased phosphorylation of IκBα and p65. MCPIP1 has emerged as a negative regulator of macrophage activation, which effectively inhibits the production of pro-inflammatory cytokines, including TNF-α, IL-1β, IL-6, and MCP-1. Here, we found that the secretion of inflammatory cytokines, including IL-1β, IL-6, and TNF-α, was significantly reduced, and MCPIP1 was elevated, by melittin pretreatment. The mRNA and protein levels of NF-κB were reduced, and inflammatory injury-induced nuclear translocation of NF-κB was reversed. These results align with those of the in vivo experiments.
MCPIP1 is an endogenous protein prominently expressed in the brain, primarily localized in neurons and microglia, which play crucial roles as primary sources of pro-inflammatory cytokines during ischemia [ 24 , 49 ]. As a negative regulator of macrophage activation, MCPIP1 exerts significant anti-inflammatory effects by inhibiting the production of a primary group of pro-inflammatory cytokines [ 50 – 52 ], such as MCP-1, IL-1β, IL-6, and TNF-α, via inhibition of the c-Jun N-terminal kinase and NF-κB signaling pathways [ 53 – 55 ]. Previous evidence has revealed that MCPIP1 expression is induced in LPS-stimulated monocytes, macrophages, and endothelial cells and is involved in LPS preconditioning-induced ischemic brain tolerance. Liang et al. confirmed that MCPIP1 is involved in LPS preconditioning-induced ischemic stroke tolerance via its anti-inflammatory activities [ 24 ]. Jin et al. found that MCPIP1 deletion results in increased infarct volume and inflammatory gene expression in mice with transient MCAO [ 22 ]. Recently, some medications, including minocycline [ 25 , 44 ], tetramethylpyrazine [ 22 , 23 ], and Huoluo Xiaoling Pellet, have also been found to mediate neuroprotection during cerebral ischemia via MCPIP1 [ 53 , 56 ]. In this study, we observed a significant and sustained increase in MCPIP1 expression induced by melittin pretreatment both in vivo and in vitro. Furthermore, siRNA-mediated inhibition of MCPIP1 significantly increased the gene and protein expression levels of IL-1β, IL-6, and TNF-α in LPS-induced BV-2 cells treated with melittin and attenuated the melittin-induced neuroprotective effects. Western blot and laser confocal immunofluorescence microscopy results demonstrated that melittin suppresses the LPS-induced increase in NF-κB expression and nuclear localization, and this protective effect was weakened by MCPIP1 knockdown in BV-2 cells.
Our results also indirectly confirmed the significant inhibitory effect of MCPIP1 on NF-κB, consistent with previous studies’ results.
From these findings, we deduce that melittin can mitigate the injury resulting from ischemic stroke. This effect appears to be mediated, at least partially, by the inhibition of inflammatory cytokines and the NF-κB pathway through the upregulation of MCPIP1 in ischemic brain tissues and cells. Our study uncovers novel insights into the potential therapeutic application of melittin for treating cerebral ischemia.
In this research, we explored and substantiated the neuroprotective influence of melittin in animal models of cerebral ischemia and a cell model of neuroinflammation for the first time. However, certain limitations warrant further improvement. Compared with primary cells isolated from brain tissue, immortalized cell lines differ in biological characteristics and may not fully replicate the genuine in vivo environment. Moreover, considering that cell transfection may diminish cellular activities and influence the outcomes of subsequent drug treatment, additional studies employing animal experiments with gene knockout are required.
We also observed that the anti-inflammatory impact of melittin was diminished with MCPIP1 knockdown but not entirely nullified. This leads us to hypothesize that other factors or pathways might contribute to this mechanism. Another concern arises from the poor stability of peptide drugs in vivo and the inherent toxicity of melittin. Drug administration was confined to small doses of the melittin monomer, as the detrimental effects of larger quantities, such as hemolysis, have limited its clinical applicability. Future research must address these challenges and may include experimental refinements or alterations to the molecular structure of the medication to expand its potential therapeutic scope [ 57 , 58 ]. | Conclusion
Our findings furnish compelling evidence that underscores the advantageous role of melittin in the therapeutic intervention of ischemic stroke, with potential applications to other neuroinflammatory diseases as well. As for the underlying mechanism, our study illuminates that melittin combats inflammatory injury in ischemic brain tissues and ameliorates cell death in LPS-induced BV-2 cells. This effect is seemingly achieved through the dual action of reducing pro-inflammatory cytokines IL-1β, IL-6, and TNF-α, and augmenting MCPIP1, thereby attenuating the NF-κB pathway. We further discovered that the induction of MCPIP1 mediates the melittin-induced resilience to inflammatory damage and plays a vital role in the tolerance to brain ischemia induced by melittin pretreatment.
These insights position melittin as a promising candidate in the arsenal against neuroinflammatory disorders. However, the pathway to clinical application is intricate and necessitates further experimental and clinical investigation to substantiate its therapeutic potency and delineate its precise mechanisms of action. | Melittin, a principal constituent of honeybee venom, exhibits diverse biological effects, encompassing anti-inflammatory capabilities and neuroprotective actions against an array of neurological diseases. In this study, we probed the prospective protective influence of melittin on cerebral ischemia, focusing on its anti-inflammatory activity. Mechanistically, we explored whether monocyte chemotactic protein-induced protein 1 (MCPIP1, also known as ZC3H12A), a recently identified zinc-finger protein, played a role in melittin-mediated anti-inflammation and neuroprotection. Male C57/BL6 mice were subjected to distal middle cerebral artery occlusion to create a focal cerebral cortical ischemia model, with melittin administered intraperitoneally. We evaluated motor functions, brain infarct volume, cerebral blood flow, and inflammatory marker levels within brain tissue, employing quantitative real-time polymerase chain reaction, enzyme-linked immunosorbent assays, and western blotting. In vitro, an immortalized BV-2 microglia culture was stimulated with lipopolysaccharide (LPS) to establish an inflammatory cell model. Post-melittin exposure, cell viability, and cytokine expression were examined. MCPIP1 was silenced using siRNA in LPS-induced BV-2 cells, with the ensuing nuclear translocation of nuclear factor-κB assessed through cellular immunofluorescence. In vivo, melittin enhanced motor functions, diminished infarction, fostered blood flow restoration in ischemic brain regions, and markedly inhibited the expression of inflammatory cytokines (interleukin-1β, interleukin-6, tumor necrosis factor-α, and nuclear factor-κB). 
In vitro, melittin augmented MCPIP1 expression in LPS-induced BV-2 cells and ameliorated inflammation-induced cell death. The neuroprotective effect conferred by melittin was attenuated upon MCPIP1 knockdown. Our findings establish that melittin-induced tolerance to ischemic injury is intrinsically linked with its anti-inflammatory capacity. Moreover, MCPIP1 is, at the very least, partially implicated in this process.
Author Contributions
XX and XZ designed the experiments and analyzed the data. XX and JF performed the behavioral experiments and assisted with collecting mouse tissues. CZ contributed to part of the acquisition of animal and cell data. XX, ZL, RD, and HH performed the ELISA, western blot, and PCR analyses. XX wrote the manuscript, and XZ helped to revise and edit the manuscript. All authors read and approved the final manuscript.
Funding
This work was supported by the National Natural Science Foundation of China (Grant Numbers 81571292, 81601152) and the Natural Science Foundation of Hebei Province (Grant Number H2017206338).
Data Availability
All datasets generated or analyzed during this study are included in this article.
Declarations
Conflict of interest
The authors declare no conflict of interest.
Ethical Approval
All animal experiments in this study were approved by the Institutional Animal Care and Use Committee of the Second Hospital of Hebei Medical University (permit No. HMUSHC-130318) in accordance with the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals. All protocols were performed to minimize pain or discomfort.

License: CC BY | Citation: Neurochem Res. 2024 Oct 9; 49(2):348-362
PMC10787674 (PMID: 38218743)

Introduction to Lymphatic Vasculature Structure and Development
The lymphatic vascular system maintains fluid homeostasis and supports immune cell trafficking and surveillance in the body. Lymph fluid contains tissue-derived cell debris, immune cells, and/or other cells or cell components. Lymph is trafficked by lymphatic vessels for removal and monitoring of tissue homeostasis by lymph nodes. Lymphatic vasculature is organized based on vessel size and function. Capillary lymphatic vessels pick up lymph fluid containing cells and debris, transporting these tissue components to the larger collecting lymphatics. Lymphatic capillaries are specifically designed to carry out this function via their specialized endothelial cell junctions, termed "button-like" junctions. Button-like junctions contain gaps between the lymphatic endothelial cells (LECs) that allow for movement of lymph through the vessel walls [ 1 , 2 ]. These gaps are also large enough for cells, such as immune cells, to pass through with the lymph. These specialized endothelial cell junctions contain adherens junctions and tight junctions. Adherens junctions are characterized by their expression of Vascular Endothelial (VE)-Cadherin, which binds through β-catenin to the cytoskeleton [ 3 ]. Tight junctions are characterized by their expression of Zonula Occludens 1 (ZO1), which regulates VE-Cadherin junctions [ 3 ]. The gaps between individual cell junctions get closer together as the lymphatic vessels increase in size, with capillary lymphatics transforming into collecting lymphatics with tight "zipper-like" junctions [ 1 ]; collecting lymphatics are also supported by a lining of smooth muscle cells that contract to help pump lymph toward the lymph node and valves that ensure unidirectional lymph flow. These collecting lymphatics are connected to a series of lymph nodes, which contain immune cells, such as macrophages, dendritic cells, T cells, and B cells, that survey lymph fluid for particles and/or pathogens to be eliminated [ 4 ].
Once filtered by the lymph nodes, the lymph returns to the blood vascular system by routing through the right lymphatic duct and the thoracic duct, which connect to the subclavian veins [ 4 , 5 ]. While this specialized system allows for immune cell uptake and transportation, it also allows lymphatic vessels to transport tumor cells to other parts of the body [ 6 ].
Initial lymphangiogenesis occurs during embryogenesis, with undifferentiated endothelial cells arising from the mesoderm, and a lymphatic sac budding from the cardinal vein to form the initial lymphatic vessels at embryonic day 9.5 in mice [ 2 , 7 , 8 ]. Transcription factor prospero homeobox protein 1 (PROX1) expression is necessary for LEC differentiation, as its expression determines lymphatic fate of endothelial cells during early differentiation [ 9 , 10 ]. Furthermore, PROX1 is necessary and sufficient for LEC fate, as downregulation promotes blood endothelial cell (BEC) identity and overexpression is sufficient to drive LEC fate and expression profiles [ 11 – 14 ]. After LEC differentiation, vessel formation begins with the establishment of pro-lymphangiogenic molecular signaling. The main pro-lymphangiogenic molecules identified to date are members of the vascular endothelial growth factor (VEGF) and receptor (VEGFR) families. VEGF signals through VEGFR to activate canonical pathways to promote cell survival, proliferation, invasiveness, and permeability, similar to what is known from angiogenesis. VEGFC and VEGFD are the two major family members characterized as pro-lymphangiogenic. VEGFC/D molecules are expressed by epithelial cells as well as stromal cells, including blood vascular and lymphatic endothelial cells, macrophages, and fibroblasts, while their receptors, VEGFR2/R3, are found almost exclusively on blood vascular and lymphatic endothelial cells [ 15 ]. VEGFC/D signaling through VEGFR2/R3, which are tyrosine kinase receptors, results in activation of protein kinase B (PKB), or AKT, and downstream extracellular signal-regulated kinase 1/2 (ERK1/2) pathways [ 16 ]. Activation of AKT/ERK pathways leads to LEC migration and survival [ 16 , 17 ].
Insight into the relative roles of VEGFC/D during initial development has been gained through studies of transgenic mice, in which Vegfc-/- mice failed to form lymphatic vessels [ 18 ] while Vegfd-/- mice exhibited only minor defects [ 19 ]. These studies suggest that VEGFC protein is required for initial development of lymphatic vasculature in the embryo, while VEGFD is likely more important for upkeep of lymphatic vasculature in the adult. Additionally, lymphatic cell-specific Cre-knockout of the Vegfr2 gene in mice showed reduced lymphatic network formation but did not affect lymphatic vessel maturation or function [ 20 ]. Finally, VEGFR2 does not induce lymphatic vessel sprouting in adult mice, while VEGFR3 activation through VEGFC/D does [ 21 ]. VEGFR3 also maintains a positive feedback loop for Prox1 gene expression to help maintain LEC progenitors during embryogenesis in mice [ 22 ]. In summary, VEGFR3 appears to be the dominant receptor for pro-lymphangiogenic signaling during embryogenesis and in adult mice.
Other implicated factors in lymphangiogenesis include the neuropilin molecules NRP1 and NRP2, transmembrane glycoproteins that associate with VEGF and VEGFR family members and can activate the downstream signaling cascades to modulate LEC migration. While NRP2 binds to VEGFR3 [ 23 ] and interacts with VEGFC and VEGFD in LECs to promote lymphangiogenesis [ 24 ], NRP1 also associates with the receptor PlexinA1 and VEGF to promote lymphatic function and valve formation [ 25 , 26 ]. Valve formation and smooth muscle cell association in collecting lymphatics is also promoted by the transcription factor forkhead box C2 (FOXC2) [ 27 ] and the growth factor ligand Angiopoietin 2 (ANG2) signaling through its tyrosine kinase receptor (TIE2) [ 28 ]. Furthermore, FOXC2 regulates cytoskeleton organization to stabilize LEC junctions and vessel structure in response to shear stress [ 29 ]. The ANG2/TIE2 receptor-ligand interaction also serves to stabilize the zipper-like LEC junctions observed in collecting lymphatics [ 28 ]. An additional well-known lymphatic vessel-associated protein is podoplanin (PDPN), which marks LECs in the lymph node and in the collecting lymphatics. PDPN is necessary for maintenance of lymphatic vascular structure [ 30 ] and further differentiates LECs from BECs during embryonic development. Pdpn gene expression has also been implicated during development of various organs, including the heart and lungs, for cell proliferation and motility [ 31 ]. A knockout of PDPN shows deformed lymphatic vessel structure and flow [ 32 ]. While FOXC2, ANG2/TIE2, and PDPN mark collecting lymphatics, lymphatic capillaries have other specific markers. For example, chemokine CCL21 allows for recruitment of dendritic cells toward lymphatic capillaries during inflammation [ 2 , 33 ]. Additionally, a frequently used LEC surface marker is lymphatic vessel endothelial hyaluronan receptor 1 (LYVE1) [ 2 , 34 ].
LYVE1 is a glycoprotein receptor that binds to hyaluronan and is a proposed marker of neo-lymphatics or lymphatic capillaries. While downstream signals activated by LYVE1 as a hyaluronan receptor on LECs are not well characterized, LYVE1 is a homolog of CD44, which is also a hyaluronan receptor known to activate lymphocytes, direct circulation of lymphocytes, and aid in tumorigenesis and lymph node metastasis in breast cancer [ 35 – 39 ]; thus, LYVE1 may serve a similar function as CD44 in LECs. Although PDPN and LYVE1 expression is found on lymphatic vessels, lymph nodes, and some blood vessels, we and others have identified PDPN and LYVE1 expression on a population of macrophages that associate with lymphatic vasculature, termed PoEMs, or Podoplanin-expressing macrophages [ 40 ]. This topic was covered extensively in a previous review on these specialized macrophages in the mammary gland [ 41 ]. PoEMs support lymphangiogenesis by developing into a pseudo-endothelial cell and incorporating into the lymphatic vessel structure in a manner that is dependent on PDPN signaling through the C-type lectin-like type II receptor (CLEC2) [ 42 ]. In a mouse mammary tumor model, PoEMs localize near tumor-associated lymphatic vasculature and promote matrix remodeling, lymphatic sprouting, and tumor cell metastasis [ 40 ]. In addition to mammary remodeling and mammary tumorigenesis, "vascular mimicry" by macrophages has been reported in various neo-lymphangiogenic events in vivo and in vitro, including inflammation [ 43 , 44 ], wound healing [ 45 ], and other tumor-associated lymphangiogenesis [ 46 – 48 ].
While we have primarily focused on identifying PoEMs in the context of breast cancer-associated lymphangiogenesis, further characterization and identification of PoEMs during mammary morphogenic events—such as during establishment of initial lymphatic formation during embryogenesis and as the mammary gland undergoes outgrowth during pubertal development—is warranted and an ongoing investigation in our lab. It is also important for studies to be undertaken to understand whether PoEMs differentiate from an influx of circulating monocytes and/or from resident mammary macrophages. Finally, in addition to venous-originated lymphangiogenesis, lymphatic vasculature also arises via hemogenic endothelial cells. During this process, endothelial cells spatially cluster together to form spontaneous lymphatic vessels in a process known as lymphvasculogenesis [ 49 ]. While lymphvasculogenesis has not been identified in the mammary gland, it may be proposed as an alternative origin of lymphatic vessels that may more easily incorporate other cell types, such as macrophages. This idea will be discussed later in this review.

Conclusions
In this review, we described what is known about lymphatic vasculature development throughout mammary gland development and its changing morphology. The establishment of lymphatics during these functional events may also contribute to the potential of tumor cell metastasis in breast cancer progression. Figure 2 shows the currently known factors that aid in characterizing lymphatic vasculature structure and lymphangiogenesis throughout mammary gland development, while also conveying gaps in this area, mainly for puberty, pregnancy, and lactation. Meanwhile, lymphatics during postpartum involution have been shown to help reestablish a "normal" mammary gland environment but could also be hijacked by a tumor during this phase to aid in metastatic spread. Since lymphatic vasculature in the mammary gland correlates with ductal morphology, investigating lymphangiogenesis during pubertal development could further elucidate mechanisms of tumor lymphangiogenesis, especially in nulliparous women. We have also described pro-lymphangiogenic signaling mechanisms in pubertal development that are similar to those of postpartum involution, providing a starting point for future studies of pubertal development. Macrophages during postpartum involution and tumorigenesis also promote lymphangiogenesis, yet a similar role for macrophages in pubertal development remains to be explored. Overall, characterizing the role and structure of the lymphatic network, and subsequent changes during mammary morphogenesis, will help us further understand mechanisms of lymphangiogenesis for tumor progression. This may lead to insights into tumor microenvironments that carry a higher risk of metastasis and factors that can then be targeted to reduce metastatic progression in breast cancer.
Abstract

Lymphatic vasculature has been shown to promote metastatic spread of breast cancer. Lymphatic vasculature, which is made up of larger collecting vessels and smaller capillaries, has specialized cell junctions that facilitate cell intravasation. Normally, these junctions are designed to collect immune cells and other cellular components for immune surveillance by lymph nodes, but they are also utilized by cancer cells to facilitate metastasis. Although lymphatic development overall in the body has been well-characterized, there has been little focus on how the lymphatic network changes in the mammary gland during stages of remodeling such as pregnancy, lactation, and postpartum involution. In this review, we aim to define the currently known lymphangiogenic factors and lymphatic remodeling events during mammary gland morphogenesis. Furthermore, we juxtapose mammary gland pubertal development and postpartum involution to show similarities of pro-lymphangiogenic signaling as well as other molecular signals for epithelial cell survival that are critical in these morphogenic stages. The similar mechanisms include involvement of M2-polarized macrophages that contribute to matrix remodeling and vasculogenesis; signal transducer and activator of transcription (STAT) survival and proliferation signaling; and cyclooxygenase 2 (COX2)/Prostaglandin E2 (PGE2) signaling to promote ductal and lymphatic expansion. Investigation and characterization of lymphangiogenesis in the normal mammary gland can provide insight to targetable mechanisms for lymphangiogenesis and lymphatic spread of tumor cells in breast cancer.
The mammary gland is first established during embryogenesis with formation of mammary rudiments from the mesenchyme in the mouse by embryonic day 10–12 [ 50 ]. Left-right asymmetry of the mammary line formation leads to independent development of paired mammary glands in the mouse, with each gland having differential developmental signaling and expression patterning [ 51 ]. In humans, the establishment of the mammary gland bud begins as early as week 5 of gestation, and formation of mammary rudiments continues throughout gestation into early newborn stages and 4 weeks postpartum [ 52 , 53 ]. However, this initial functional and morphological development of the female breast tissue continues until 2 years of age, after which the glands remain quiescent until puberty, which occurs on average from ages 8.5 to 14 years [ 54 ]. The mammary rudiments expand during puberty to form a ductal tree, where hormonal signaling—including growth hormone (GH), prolactin (PRL), insulin-like growth factor (IGF) 1, and estrogen-mediated activation of the estrogen receptor (ER)—encourages elongation of the terminal end buds (TEBs) located at the tip of the mammary ducts [ 50 ]. The TEBs are surrounded by a layer of cap cells that lead invasion into the mammary fat pad [ 53 , 55 ]. Bifurcation and side branching of the TEBs occur to create a vast ductal network infiltrating the mammary fat pad [ 50 ]. This branching and elongation are regulated by several growth factors, such as amphiregulin (AREG), transforming growth factor beta 1 (TGFβ1), and epidermal growth factor (EGF) [ 50 ]. AREG can promote proliferation of LECs in mice [ 56 ]; TGFβ1 stabilizes vessel structure in mice [ 57 ]; and EGF receptor (EGFR) promotes lymphangiogenesis in normal development and tumor mouse models [ 56 , 58 – 60 ]. As elongation of the ducts occurs, highly proliferative ductal epithelial cells trail behind the invading cap cells, with the inner epithelial cells forming a cleared lumen.
These luminal epithelial cells are encapsulated by cap cells that differentiate into myoepithelial cells [ 55 ], thus forming the final functional structure of the mammary gland for the future stages of mammary morphogenesis.
The lymphatic network in the mouse mammary gland is first established during embryogenesis and undergoes remodeling along with the mammary gland throughout mammary morphogenic events. Evidence suggests that lymphatic vasculature aids in surveillance of the mammary tissue by immune cells as well as by functioning to clear away cell debris during these developmental stages. In humans, axillary lymph nodes drain 75–80% of lymph from breast tissue, demonstrating the importance of lymphatic development during mammary morphogenesis [ 61 ]. During early pregnancy, mouse mammary lymphatic vessel density (LVD) is increased and corresponds to increases in VEGFC and VEGFD expression [ 62 ]. The increased VEGFC/D expression then decreases during late pregnancy and lactation [ 63 ]. Furthermore, LVD, as measured by LYVE1 + or PROX1 + vessels per mm³, was found to decrease during lactation compared to pregnancy [ 62 ]. This lack of lymphatic vessel identification may be explained by enlargement of ducts during lactation, making visualization of the vessels difficult. Mouse mammary lymphatic vessels are also not found associated with mammary alveoli during pregnancy and lactation, possibly because lymphatic vessels in proximity to milk-producing alveoli could physically limit and block the milk supply [ 63 ]. Furthermore, enlarged intramammary and axillary lymph nodes during lactation in human patients can be detected via mammography [ 64 ], indicating immune surveillance of the mammary gland. This active immune surveillance through lymphatic vasculature is additionally supported by identification of a specialized subset of macrophages present in human milk during lactation that show a strong immune response in mice [ 65 ]. After lactation is complete and involution begins, there is an increase in apoptotic cell debris from the remodeling events [ 66 ], which is likely to be cleared from the mammary gland through the lymphatic vasculature.
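As a side note on quantification: LVD as used above is simply a count of marker-positive (e.g., LYVE1+ or PROX1+) vessel profiles normalized to the amount of tissue examined. A minimal sketch of this arithmetic, using entirely hypothetical counts and volumes (not data from the studies cited here), might look like:

```python
# Hedged sketch of lymphatic vessel density (LVD) quantification:
# marker-positive vessel counts (e.g., LYVE1+ or PROX1+) divided by the
# tissue volume examined, reported per mm^3. All numbers are hypothetical.

def lymphatic_vessel_density(vessel_count: int, tissue_volume_mm3: float) -> float:
    """Return LVD as marker-positive vessels per mm^3 of tissue."""
    if tissue_volume_mm3 <= 0:
        raise ValueError("tissue volume must be positive")
    return vessel_count / tissue_volume_mm3

# Hypothetical comparison of equal-volume sections from two glands.
lvd_pregnancy = lymphatic_vessel_density(vessel_count=24, tissue_volume_mm3=2.0)
lvd_lactation = lymphatic_vessel_density(vessel_count=10, tissue_volume_mm3=2.0)
print(lvd_pregnancy, lvd_lactation)  # 12.0 5.0
```

Normalizing counts to volume, rather than comparing raw counts, is what makes LVD comparable across differently sized sections, such as the enlarged ducts of the lactating gland described above.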
During the first, reversible phase of involution, VEGFC remains low, along with LVD [ 62 , 67 ]. During the second, irreversible phase, VEGFC is increased two-fold [ 62 ]. This increase in VEGFC coincides with an increase in VEGFR2/3 [ 62 ], leading to a remodeling of the lymphatic network during involution. A more in-depth review of postpartum lymphatics has been published [ 62 ]. Importantly, the new lymphatic structures that develop during involution have been postulated to persist in women up to 10 years postpartum and may aid in progression of postpartum breast cancers (PPBCs) [ 62 ]. VEGFC has also been shown to recruit tumor-associated macrophages (TAMs) into the mammary gland in mice [ 68 ]. Therefore, VEGFC could play a similar role during mammary morphogenic events by helping to recruit macrophages for remodeling and debris clearance during puberty, involution, and tumorigenesis. For example, blood serum from human lipedema patients had increased systemic VEGFC, which may have contributed to an increase in macrophage infiltration, yet there was no discernable change in LVD found in corresponding patient lipedema tissues [ 69 ]. These macrophages were identified as a subpopulation with overexpression of CD163, a scavenger receptor, which, when expressed on macrophages, aids in inflammation resolution [ 70 ]. CD163 expression is also associated with TAMs [ 71 ].
Lymphangiogenesis has yet to be fully characterized during pubertal development, which is when the mammary gland matures to be fully formed and poised to respond to the hormones of pregnancy. One study, by Betterman et al., investigated lymphangiogenesis in the post-embryonic developing mammary gland in mice [ 63 ]. Lymphatic vessels were found alongside and spiraled around elongating mammary ducts, similar to blood vessels, demonstrating that signaling from the mammary ductal cells may regulate lymphangiogenesis during this expansion event. LVD is also increased in mammary glands of MMTV-PyMT mice—a well-established mouse model characterized by increased epithelial cell proliferation that results in spontaneous tumors. This increase in LVD implies an increase in lymphangiogenic growth and/or patterning factors coinciding with the epithelial cell proliferation and expansion, similar to what occurs during puberty [ 72 ]. Moreover, a mouse model with a reduced mammary ductal tree expansion (MMTV-specific Cre-inducible Gata3 KO) showed reduced LVD, further supporting a correlation between ductal elongation and lymphangiogenesis during pubertal development [ 63 ]. Investigation into signaling responsible for this correlation showed increased expression of pro-lymphangiogenic growth factors (VEGFC/D, PDGFA, PDGFB, FGF1, HGF) in myoepithelial cells compared to luminal epithelial and hematopoietic cells. Following pubertal development, VEGFC, and not VEGFD, is the primary pro-lymphangiogenic stimulus in the formation of the pregnancy-associated mammary gland [ 63 ]. In a Vegfd knockout mouse model, there was no change to ductal branching-associated lymphangiogenesis in the virgin and pregnancy-associated mammary gland, which is similar to what was seen during initial lymphangiogenesis in the Vegfd-/- embryo, where there were no major changes in lymphatic vessel formation [ 19 ].
Overall, it is thought that the myoepithelium of the mammary ducts promotes lymphangiogenesis during post-embryonic development in the mammary gland, and that this process is primarily driven by VEGFC.
Since lymphangiogenesis presumably correlates with proliferation of the myoepithelium, the left-right asymmetry established in the ductal tree formation during embryogenesis [ 51 ] could also be important for lymphangiogenesis. While left-right asymmetry of the mammary gland has been primarily investigated in mice and humans, it is thought to occur in most mammals due to bilateral development of mammary gland pairs [ 51 ]. This left-right asymmetry of the mammary ductal network leads to laterally-unique gene expression profiles between mammary glands, and can lead to differential oncogene activity and disease progression in right versus left mouse mammary tumors [ 73 ]. Furthermore, left-right asymmetry in breast volume measured via mammogram is a predictive factor for increased risk of breast cancer in human patients [ 74 ]. Therefore, the left-right asymmetry in ductal outgrowth during embryogenesis and pubertal development could establish left-right asymmetry for the lymphatic network in the mammary gland, and go on to similarly establish a basis for asymmetric lymphangiogenesis during tumorigenesis. However, left-right asymmetry remains to be investigated in mammary lymphangiogenesis. For example, it would be of interest to investigate whether a right mammary gland has increased VEGF expression during mammary morphogenesis compared to the left, and if this persists in lateral tumorigenesis and tumor-associated VEGF expression in mice. Additionally, we and others have shown that LVD is increased in breast tissues from recently pregnant women [ 75 ], and breast cancers from recently pregnant women, or postpartum breast cancers (PPBCs), have increased tumor-associated LVD and increased risk for metastasis [ 67 ]. Therefore, since LVD correlates with mammary ductal outgrowth [ 63 ], similar asymmetrical lymphangiogenesis during postpartum involution could be an important factor in this increased risk.
In summary, although the role for lymphatic vasculature in the developing mammary gland during puberty is less well-studied, investigating the similarities between the remodeling events that occur during puberty and involution, including epithelial cell survival and stromal cell matrix remodeling, may lead to additional insights. By comparing these remodeling events (Fig. 1 ), we may further understand lymphangiogenic signaling that normally occurs in the mammary gland, which may lead to a deeper understanding of tumor-associated lymphangiogenesis. In the next few sections we will summarize cells and signals that orchestrate these remodeling events.
Macrophages
Macrophages play a variety of roles across tissues and are typically identified on a "polarized" spectrum of function depending on expressed and secreted markers [ 76 – 78 ]. M1-polarized macrophages (CD38 + iNOS + TNFα + IL-1 + IL-6+) are known to play a pro-immunity role during immune responses such as pathogen recognition and clearance; M2-polarized macrophages (CD206 + ARG1 + IL-10 + IL-4+) are known to play an anti-inflammatory role and can be involved in resolution of immune responses as well as processes such as wound healing [ 77 ]. The impact of macrophages on lymphangiogenesis in the mammary gland has been primarily characterized during postpartum involution. During postpartum involution, macrophages express pro-lymphangiogenic stimuli and are primarily M2-polarized [ 42 , 79 , 80 ]. A conditional macrophage knockout mouse model (macrophage colony-stimulating factor 1 (CSF1) receptor knockout) initiated during involution showed a delay in postpartum involution through suppression of epithelial cell apoptosis and adipocyte repopulation [ 80 ]. However, the impact of macrophages on pubertal lymphangiogenesis in the mammary gland has yet to be identified. Estrogen, a steroid hormone mostly associated with female reproductive development, is known to regulate ductal elongation during puberty via its receptors ERα and ERβ on mammary epithelial cells. Estrogen has also been shown to recruit macrophages to the developing mammary gland, likely through AREG and EGFR signaling [ 81 ]. Additionally, estrogen promotes postpartum involution through increased mammary inflammation, cell apoptosis, and adipocyte repopulation [ 82 ]. Therefore, estrogen could regulate macrophage involvement during pubertal development of the mammary gland. Furthermore, lymphatic endothelial cells express ERα, and estrogen can promote gene expression of lymphatic-specific markers, such as Prox1 [ 83 ].
Beyond this, estrogen signaling has not been directly linked to lymphangiogenesis or lymphatic vascular stability; it may therefore be of interest to determine its contribution to the regulation of macrophages in remodeling events such as pubertal development and lymphangiogenesis.
Since VEGFC recruits macrophages to the mammary gland during tumor development [ 68 ], this mechanism could also be at play during pubertal development of the mammary gland. During pubertal ductal tree expansion, macrophages associate with developing TEBs for ductal elongation [ 84 ]. Consistent with a role for macrophages in ductal outgrowth, a macrophage-deficient mouse model (CSF1 knockout) showed defective outgrowth and branching during development [ 85 ]. Additionally, macrophages in the pubertal mammary gland are found to express the M2-polarized macrophage marker ARG1 [ 81 ]. Although the primary source of pro-lymphangiogenic signals in the developing ductal tree was found to be myoepithelial cells and not hematopoietic cells, which would include macrophages, macrophages could still be an important source of matrix remodeling signaling for lymphatic invasion in the mammary gland [ 63 ]. For example, bone marrow-derived macrophages (BMDMs) can express LYVE1, PROX1, and PDPN, and form lymphatic-like structures in vitro [ 47 ], and LYVE1 + PROX1 + PDPN + bone marrow-derived cells undergo lymphatic vascular mimicry during neo-lymphangiogenesis in VEGFC-expressing pancreatic tumors in mice [ 47 ]. We have termed this macrophage-LEC interaction "macphatics". Our group identified macphatics during mammary gland involution in mice as well as in breast cancer-associated lymphatic vasculature in mice and in human tissues [ 41 , 42 , 62 ]. Other groups have shown macrophages associated with lymphatics throughout development, mainly through mouse models.
For example, myeloid-derived macrophages colocalized with lymphatic vessels in the heart during embryonic development [ 86 ]; PDPN + cells derived from bone marrow with increased lymphangiogenic marker expression showed incorporation in lymphatic vessels in the cornea, wounded skin, and peritumoral melanoma tissues [ 87 , 88 ]; and CD11b + macrophages formed tube-like structures and expressed lymphatic markers during inflammation-associated lymphangiogenesis in the cornea [ 43 ]. Furthermore, cultured macrophages have been polarized in vitro toward VEGFR3-expressing lymphatic endothelial cell progenitors and then integrated into lymphatic vessels in an inflammatory mouse model [ 44 ]. We hypothesize similar processes may occur during pubertal mammary gland lymphangiogenesis, with a role for macrophages as a pseudo-endothelial cell in addition to promoting ductal elongation. Additionally, macrophage-vascular mimicry and macphatics formation have not been investigated in the context of lymphvasculogenesis (defined above), which occurs from the hemogenic endothelium and is an origin of macrophages during embryogenesis. Although macrophage origin and differentiation outside their association with lymphatic vasculature is beyond the scope of this review, we wanted to bring attention to the similar hemogenic origin.
It is also unknown whether macrophages that express pro-lymphangiogenic markers are polarized to M1-like or M2-like phenotypes. It is widely regarded that tumor-promotional M2-like macrophages promote lymphangiogenesis due to expression of pro-lymphangiogenic factors. However, the involvement of M1- or M2-like macrophages in vascular mimicry has yet to be investigated. Our data suggest that PoEMs, which can form macphatics, more closely resemble M2-like macrophages [ 42 ]. Meanwhile, identification of the role of differentially polarized macrophages in in vitro angiogenic assays shows that M1-like macrophages are more likely to contribute to vessel formation and sprouting while M2-like macrophages contribute toward stability [ 89 ]. However, both M1- and M2-like macrophage populations are present in in vivo vascular grafts in mice with no discernable difference in contribution between macrophage subtypes [ 89 ]. Since the primary population of macrophages during postpartum involution and pubertal development has been identified to be M2-like, how M2-like macrophages contribute to lymphangiogenesis during development in the mammary gland compared to M1-like macrophages remains to be investigated. Additionally, while TAMs have been shown to express M1-like expression factors, such as TNFα [ 46 , 78 ], TAMs in the mammary gland are typically identified as M2-polarized with similar expression of pro-tumorigenic factors, such as VEGFC/D, matrix remodeling enzymes (MMPs), and ANG2 [ 46 ]. In pancreatic cancer, an increase in M2-polarized TAMs (CD163 + CD68+) correlated with an increase in LVD and poor prognosis in patients compared to unpolarized TAMs (CD69+) [ 90 ]. In a mouse model of lung adenocarcinoma, most TAMs were found to be M2-like (CD68 + CD206+) and increased M2-like TAMs correlated with increased LVD [ 91 ].
In contrast, one study found no correlation between VEGFC-expressing TAMs and LVD in breast cancer patients [ 92 ]; however, these VEGFC-expressing TAMs did positively correlate with VEGFC-expressing tumors [ 92 ]. Taken together, M2-like macrophages appear to contribute more to lymphangiogenesis overall, yet these mixed findings also demonstrate the need for further investigation in the mammary gland into how macrophages aid in pro-lymphangiogenic signaling versus vascular mimicry, and whether this changes with macrophage polarization.
STAT Signaling
Signal transducer and activator of transcription (STAT) molecules are cytoplasmic transcription factors that regulate gene transcription after being phosphorylated and activated via cytokine signaling through Janus kinase (Jak) receptors [ 93 , 94 ]. While there are seven described mammalian STAT molecules, here we discuss three STATs that have been implicated in mammary gland morphogenesis: STAT3, STAT5, and STAT1. STAT3 expression increases in the mouse mammary epithelium during postpartum involution, where it first induces an apoptotic response during the first phase of involution and then induces an immune response and macrophage polarization during the second phase [ 95 ]. STAT3 is also implicated in VEGFC/VEGFR3 signaling during breast cancer. High expression levels of STAT3 in invasive breast cancer correlate with lymph node metastases, VEGFC/D, and VEGFR3 expression [ 96 ]; therefore, STAT3 may play an indirect role in promoting lymphangiogenesis during involution. Additionally, VEGF/VEGFR2 signaling has been shown to promote STAT3 activation leading to self-renewal of breast cancer cells in vitro [ 97 ] and VEGFA induces LEC migration and tube formation via STAT3 signaling [ 98 ].
Two additional members of the STAT family are the isoforms of STAT5: STAT5A and STAT5B. Both have been implicated in mammary gland development and breast cancer progression. A Stat5a knockout mouse model shows reduced secondary and side branching during pubertal development in the mammary gland as well as reduced proliferation and differentiation of epithelial cells [ 99 ]. A Stat5b knockout mouse model also showed impaired mammary gland development, inconsistent viable pregnancies, and insufficient milk proteins during lactation [ 100 ]. Since STAT5A and STAT5B can heterodimerize as well as homodimerize as transcription factors, the two isoforms are typically genetically manipulated together in mouse models. Dual STAT5A and 5B deletion in a mouse model prior to pregnancy inhibited alveolar progenitor cell proliferation, differentiation, and cell survival during pregnancy [ 101 ]. STAT5-mediated pro-survival signaling keeps STAT3 inactive until involution begins. Therefore, the opposing regulation of STAT3 and STAT5 during mammary morphogenic events could contribute to a regulation of lymphangiogenesis, which is yet to be investigated. Interestingly, in a mouse model, knocking out STAT5 expression in macrophages has been shown to increase expression of tissue remodeling factors, including collagen and VEGFA [ 102 ]. Furthermore, loss of STAT5 expression in these macrophages increased tumor size and lung metastasis [ 102 ]. Therefore, STAT expression in macrophages could also impact the role of macrophages in promoting lymphangiogenesis. Finally, STAT1 is phosphorylated and active in mature virgin glands and after involution [ 94 ], suggesting a similar signaling mechanism at play during puberty and involution, but not during pregnancy or lactation.
In summary, STAT signaling in mammary epithelial cells serves different purposes in puberty and involution, matching the function of each morphological event: pro-survival signaling in the former versus apoptotic induction in the latter. However, there may be similar extrinsic mechanisms regulating ductal morphology that impact the corresponding lymphangiogenesis. Overall, this reflection supports further investigation comparing STAT signaling in pubertal development with that in involution, to determine how STAT signaling contributes to lymphangiogenesis during puberty and postpartum involution.
COX2/PGE2 Signaling
Cyclooxygenase 2, or COX2, is an important regulator of the lymphatic-associated “wound-healing” components during mammary gland postpartum involution. In a mouse model of postpartum involution, inhibition of COX2 via celecoxib during postpartum involution reduced LVD without significantly interfering with mammary gland involution morphology [ 67 ]. Additionally, COX2 and its downstream metabolite, prostaglandin E2 (PGE2), have been shown to promote lymphangiogenesis in the breast tumor microenvironment [ 103 , 104 ]. VEGFC and COX2 are significantly correlated with lymphangiogenesis and a poor prognosis of invasive breast cancer, including increased lymph node metastasis and worse overall survival of breast cancer patients [ 105 ]. In prostate cancer, COX2 correlates with VEGFC expression, tumor lymphangiogenesis, and lymphatic metastasis [ 106 ]. Furthermore, VEGFD promotes lymphatic vessel dilation through PGE2 [ 107 ], serving as a COX2-mediated mechanism for lymphatic tumor cell spread. Although little is known of the role of COX2 during pubertal mammary gland development, an MMTV-COX2 transgenic mouse crossed with a PGE2 receptor knockout mouse (Ep2-/-) shows normal ductal development compared to the expected mammary hyperplasia in a wild-type cross. This normal phenotype suggests that pubertal mammary ductal expansion is also regulated through PGE2 signaling via this receptor [ 108 ]. Therefore, a similar mechanism of promoting lymphangiogenesis through COX2 found during involution could be occurring during pubertal development, since increased ductal expansion during puberty correlates with increased LVD [ 63 ]. Finally, PGE2 may polarize macrophages to be M2-like [ 109 ], which are found in both the pubertal mammary gland and postpartum involuting mammary gland [ 79 , 81 ]. Therefore, macrophages may be transformed to promote lymphangiogenesis through similar mechanisms mediated by PGE2.
In summary, we have detailed that similar pro-lymphangiogenic signaling mechanisms, which have been characterized during postpartum involution, are also at play during pubertal development (Fig. 1 ). We reported that the stromal remodeling events during postpartum involution, such as increased COX2 expression and collagen deposition, create a favorable microenvironment for breast cancer progression [ 110 ], easing access for tumor cells to metastasize through neo-lymphangiogenesis and macphatic formation. We propose that the macrophage-involved lymphangiogenic events, such as macphatic formation, could also be occurring during pubertal lymphangiogenesis. Therefore, lymphangiogenic signaling during pubertal mammary gland development should be studied, as this could provide insight into similar mechanisms occurring during tumorigenesis, especially for nulliparous women who develop breast cancer.
Connecting Mammary Gland Lymphatic Development to Lymphatics in Breast Cancer
Breast cancer is the most commonly diagnosed cancer in women in the US and is the second leading cause of cancer-related deaths in women [ 111 ]. Breast cancer is also the primary tumor site most likely to lead to distant metastasis in women [ 112 ]. Although breast cancer metastasis can occur via blood vasculature or lymphatic vasculature, some mouse models indicate that breast cancer may preferentially metastasize through lymphatic vasculature and that passage through the lymph node may be required to enter the blood [ 113 – 115 ]. Additionally, the stage and metastatic spread of breast cancer are initially determined by identifying tumor cells in the draining lymph node, demonstrating the importance of studying mechanisms of lymphatic formation and structure in the mammary gland.
Angiogenesis/lymphangiogenesis is one of the hallmarks of cancer [ 116 ]. Inducing vascular growth in tumors promotes blood supply and dilated lymphatic vessels, which is thought to allow for tumor cell migration and thereby promote distant metastasis [ 117 ]. Tumor cell migration can also be aided by an increase in lymphatic flow from enlarged vessels. These enlarged vessels have been visualized via intravital microscopy and fluorescence photobleaching in a mouse tumor model with VEGFC overexpression [ 118 ]. VEGF expression is increased in many solid tumors, including breast cancer [ 119 ], and is expressed by tumor cells to promote pro-tumorigenic lymphangiogenesis [ 120 ]. As evidence that VEGFC contributes to breast cancer progression, overexpression of VEGFC in an orthotopic mouse model of breast cancer showed increased intratumoral lymphangiogenesis alongside increased metastasis to the lymph nodes and lung [ 121 ]. Human breast cancers with high expression of VEGFC are characterized by higher LVD in tissues biopsied from tumors, increased lymph node metastasis, distant metastasis, and a worse prognosis [ 122 ]. VEGF expression can also be induced by the hypoxic environment in solid tumors via hypoxia-inducible factors (HIFs) [ 123 ]. Normal developmental cues for lymphangiogenesis are also found in tumor-associated lymphatic vessels, including expression of NRP2, which is typically expressed in prenatal lymphangiogenesis [ 124 ]. Similarly, vasculogenic mimicry, which we have identified during mammary gland involution, has been found in inflammatory breast cancer and ductal breast carcinoma [ 125 ]. Therefore, further characterizing mechanisms of lymphangiogenesis in normal mammary gland development may provide insight into tumor-associated lymphangiogenesis in the mammary gland and in breast cancer.

Author Contributions
P.A.D. and T.R.L. wrote the main manuscript. P.A.D. prepared all figures using BioRender.com. All authors reviewed the manuscript.
Declarations
Competing Interests
The authors declare no competing interests.

Citation: J Mammary Gland Biol Neoplasia. 2024 Jan 13; 29(1):1. License: CC BY.
PMC10787675 (PMID 37878148)

Introduction
The prevalence of obesity has been rising for decades [ 1 ]. It is estimated that over one fifth of the female world population will be obese by the year 2025 [ 1 ]. Obesity is a well-known risk factor for the development of breast cancer in postmenopausal women and has been associated with a higher risk of disease recurrence and death following a diagnosis of early breast cancer (EBC) [ 2 – 6 ]. The causal relationship between obesity and breast cancer risk and prognosis is complex, but might, at least in part, be explained by an increased peripheral conversion of androgens to oestrogens [ 7 ].
A number of patients with breast cancer will eventually develop (distant) metastases [ 8 ]. In advanced (i.e., metastatic) breast cancer (ABC), mixed results have been reported on the prognostic effect of overweight and obesity [ 9 – 19 ]. Interpretation of these results is furthermore complicated by differences in patient and treatment characteristics, study population size, body mass index (BMI) categorisation, and endpoints. Moreover, the majority of studies in ABC exclude underweight patients or do not categorise them as a separate group of patients [ 10 , 11 , 14 – 19 ]. The French ESME cohort study, however, recently showed that underweight patients with ABC have a lower overall survival (OS) and first-line progression-free survival (PFS) when compared with normal weight patients with ABC [ 12 ].
Apart from clarifying the prognostic effect of BMI in patients with ABC, it might also be important to study potential effect modifiers, such as patient, tumour, and treatment characteristics. In the general population, for example, it was shown that the association between BMI and all-cause-mortality tends to differ by age [ 20 ]. In fact, several studies observed that overweight, when compared with normal weight, was a protective factor for all-cause mortality in older adults [ 21 – 23 ]. In patients with EBC, age-dependent associations between BMI and death from any cause have also been reported (Lammers S.W.M., Geurts S.M.E., van Hellemond I.E.G. et al. The prognostic and predictive effect of body mass index in hormone receptor-positive breast cancer [submitted for publication]) [ 24 ]. In addition, following advancements in systemic therapy over the years (i.e., the introduction of cyclin-dependent kinase (CDK) 4/6 inhibitors in the treatment of patients with hormone receptor-positive/human epidermal growth factor receptor-2-negative (HR+/HER2−) ABC), the prognostic effect of BMI might be modified by period and type of treatment [ 25 ].
The current study therefore aimed to address two research questions in a real-world cohort of patients diagnosed with HR+/HER2− ABC between 2007 and 2020 in the Netherlands. All patients received endocrine therapy with or without a CDK 4/6 inhibitor as first-given systemic therapy. The primary aim of this study was to evaluate whether BMI is an independent prognostic factor for both OS and PFS. The secondary aim of this study was to evaluate whether this prognostic effect of BMI is modified by age at diagnosis, period of treatment, or type of treatment.

Methods
Study design and population
Patients were identified from the Southeast Netherlands Advanced Breast Cancer (SONABRE) registry (NCT03577197), an ongoing prospectively maintained retrospective cohort study [ 26 ]. The SONABRE registry includes all patients (≥ 18 years) diagnosed with de novo or recurrent ABC from eleven hospitals in the southeast of the Netherlands since 2007. Information about patient, tumour, and treatment characteristics is retrospectively collected from medical files by trained registration clerks. Treatment, progression per treatment line, and survival data are updated annually.
For the current analysis, all patients diagnosed with HR+/HER2− ABC who received endocrine therapy with or without a CDK 4/6 inhibitor as first-given systemic therapy between 2007 and 2020 were identified from ten participating hospitals. Of note, in the Netherlands, CDK 4/6 inhibitors were implemented for treatment of HR+/HER2− ABC in August 2017 [ 27 ]. Patients with an unknown BMI at diagnosis were excluded as well as patients who received another type of systemic therapy or no systemic therapy. Data lock was on November 11, 2022.
Approval for the SONABRE registry was obtained from the Medical Research Ethics Committee of the Maastricht University Medical Centre (15-4-239).
Definitions
Tumours were considered HR+ if ≥ 10% of invasive cells had a positive nuclear staining of oestrogen and/or progesterone receptors. HER2-negativity was defined by an immunohistochemistry score of 0 or 1 or a negative fluorescence in situ hybridization result.
BMI was calculated from weight and height (BMI = weight [kg]/height [m] 2 ), measured by the treating physician or self-reported by the patient at diagnosis. In accordance with the World Health Organization criteria, BMI was categorised as underweight (< 18.5 kg/m 2 ), normal weight (18.5–24.9 kg/m 2 ), overweight (25.0–29.9 kg/m 2 ), or obese (≥ 30.0 kg/m 2 ).
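The BMI formula and WHO cut-offs above translate directly into a small helper. This is an illustrative sketch only; the function names are ours, not part of the registry software:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight [kg] / height [m]^2."""
    return weight_kg / height_m ** 2


def bmi_class(value: float) -> str:
    """WHO categories as applied in this study."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal weight"
    if value < 30.0:
        return "overweight"
    return "obese"  # >= 30.0 kg/m^2
```

For example, a patient weighing 70 kg at 1.70 m has a BMI of about 24.2 kg/m 2 and is classed as normal weight.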
Metastatic-free interval (MFI) reflects the time between primary breast cancer diagnosis and diagnosis of metastatic disease. An MFI of < 3 months was considered de novo metastatic disease. Endocrine resistance was defined as experiencing a relapse during or within 12 months after finishing adjuvant endocrine therapy. Endocrine sensitivity was defined as experiencing a relapse more than 12 months after completing adjuvant endocrine therapy or having no prior exposure to endocrine therapy.
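These definitions reduce to two simple decision rules. The sketch below is our own illustration; the names and the month-based encoding (relapse during therapy represented as a non-positive value) are assumptions, not the registry's actual implementation:

```python
def presentation(mfi_months: float) -> str:
    """An MFI of < 3 months counts as de novo metastatic disease."""
    return "de novo" if mfi_months < 3 else "metachronous"


def endocrine_status(months_after_adjuvant_et, prior_adjuvant_et: bool) -> str:
    """Resistant: relapse during (encoded here as <= 0) or within 12 months
    after finishing adjuvant endocrine therapy. Sensitive otherwise,
    including patients with no prior endocrine therapy exposure."""
    if not prior_adjuvant_et:
        return "sensitive"
    return "resistant" if months_after_adjuvant_et <= 12 else "sensitive"
```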
Endpoints
The primary endpoint was OS, defined as the time between the start of first-given systemic therapy for ABC and the date of death from any cause. The secondary endpoint was PFS, defined as the time between the start of first-given systemic therapy for ABC and the date of progression or death. Progression was defined as occurrence of a new metastatic site or progression of existing metastases. These findings were based on imaging, the presence of tumour markers, and/or the presence of symptoms.
Statistical analysis
Baseline characteristics were compared between BMI classes using the Chi-squared test (categorical variables) and the Kruskal–Wallis test (continuous variables).
Median OS and PFS were calculated using the Kaplan–Meier method. Differences between BMI classes were assessed with the log-rank test. In the absence of an event, patients were censored at the last follow-up date. Patients subjected to a new line of therapy due to toxicity without progression of disease were also censored in the analysis of PFS as of the date of start of new treatment.
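The analyses themselves were run in SPSS and Stata, but the product-limit estimator behind these median estimates is compact enough to sketch in pure Python; the toy data below are invented for illustration:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator: S(t) = prod of (1 - d_i / n_i)
    over event times t_i, where d_i deaths occur among n_i still at risk.
    events[i] is 1 for an event (death) and 0 for censoring at times[i]."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e)
        leaving = sum(1 for tt, _ in data if tt == t)  # deaths + censored at t
        if deaths:
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= leaving
        i += leaving
    return curve


def median_survival(curve):
    """First time at which the survival estimate falls to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached


# Invented toy data: follow-up in months; status 0 marks a censored patient.
months = [1, 2, 3, 4, 5, 6]
status = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(months, status)
```

Censored patients (status 0) leave the risk set without producing a step in the curve, which is exactly how the last-follow-up and toxicity-switch censoring rules above enter the estimate.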
Multivariable Cox proportional hazards regression analyses were performed to evaluate whether BMI remained an independent prognostic factor for both OS and PFS. Multivariable analyses were performed in the total study population and in patients with metachronous metastases. Prognostic factors with a univariable p-value of ≤ 0.10 were included in the multivariable analyses. The following potential confounding factors were considered: age, WHO performance status, presence of comorbidities, MFI, number of metastatic sites, and site of metastases [ 12 , 25 ]. In patients with metachronous metastases, endocrine sensitivity was included as an additional confounding factor.
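As a didactic sketch of what a Cox model optimises, the partial log-likelihood for a single binary covariate can be written out and maximised by grid search. The data and names below are invented, and the study's actual models were multivariable fits in SPSS/Stata:

```python
from math import exp, log


def cox_partial_loglik(beta, times, events, x):
    """Partial log-likelihood for one covariate (no tie correction is
    needed here because the toy event times are distinct):
    ll(b) = sum over deaths i of [b * x[i] - log(sum of exp(b * x[j])
    over all subjects j still at risk at times[i])]."""
    ll = 0.0
    for i, (t_i, e_i) in enumerate(zip(times, events)):
        if not e_i:
            continue
        risk = sum(exp(beta * x[j]) for j, t_j in enumerate(times) if t_j >= t_i)
        ll += beta * x[i] - log(risk)
    return ll


# Invented data: the x = 1 group has uniformly earlier deaths, so the
# fitted hazard ratio exp(beta) should come out above 1.
times = [1, 2, 3, 10, 11, 12]
events = [1, 1, 1, 1, 1, 1]
x = [1, 1, 1, 0, 0, 0]

grid = [b / 10 for b in range(-30, 31)]
beta_hat = max(grid, key=lambda b: cox_partial_loglik(b, times, events, x))
hazard_ratio = exp(beta_hat)
```

Each HR reported in the tables is the exponentiated coefficient from such a fit, adjusted for the listed confounders.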
As the association between BMI and all-cause mortality differs by age in the general population [ 20 ] and systemic treatment of patients with HR+/HER2− ABC changed over time [ 25 ], analyses were stratified by age at diagnosis of ABC (< 60 versus ≥ 60 years), period of treatment (2007–2011 versus 2012–2016 versus 2017–2021), and type of first-line treatment (endocrine monotherapy versus endocrine therapy with a CDK 4/6 inhibitor). The BMI-by-age, BMI-by-period, and BMI-by-treatment interactions were tested using likelihood ratio tests.
All statistical tests were conducted two-sided with a statistical significance threshold of p ≤ 0.05 and performed with SPSS (version 25) and Stata (version 17).

Results
Patient characteristics
Of 4365 patients included in the SONABRE registry between 2007 and 2020, 2709 patients were diagnosed with HR+/HER2− ABC (Fig. 1 ). After exclusion of patients without a BMI measurement at diagnosis (n = 764) or patients who did not receive endocrine therapy with or without a CDK 4/6 inhibitor as first-given systemic therapy (n = 489), the eligible study population consisted of 1456 patients. Among these patients were 35 (2%) underweight, 580 (40%) normal weight, 479 (33%) overweight, and 362 (25%) obese patients.
The presence of comorbidities and bone-only metastases increased significantly with increasing BMI class, whereas the presence of visceral metastases decreased (p ≤ 0.001) (Table 1 ). When compared with other BMI classes, underweight patients had a worse WHO performance status and were more frequently diagnosed with de novo metastatic disease (p ≤ 0.001). In patients with metachronous metastases, the presence of endocrine sensitivity slightly differed between BMI classes (p = 0.04) (Supplementary Table 1). The percentage of endocrine-resistant patients was higher in underweight (35%), overweight (35%), and obese patients (38%) than in normal weight patients (28%).
Overall, 1200 patients received endocrine monotherapy and 256 patients received endocrine therapy in combination with a CDK 4/6 inhibitor as first-given systemic therapy between 2007 and 2020. After the implementation of CDK 4/6 inhibitors in the Netherlands, between 2017 and 2020, 31% of patients received endocrine therapy in combination with a CDK 4/6 inhibitor as first-given systemic therapy (Supplementary Figure 1). The use of CDK 4/6 inhibitors was similar between BMI classes (p = 0.87). In the total study population (n = 1456), the majority of patients (80%) received an aromatase inhibitor as first-line endocrine therapy (Table 1 ). All other patients received either tamoxifen (11%), fulvestrant (8%), or another type of endocrine therapy (1%). First-line endocrine therapy choices were equally distributed among BMI classes (p = 0.39).
Prognostic impact of BMI on OS
The median follow-up time of the total study population was 60.9 months (IQR 37.5–96.0). No statistically significant difference in OS was observed between BMI classes, with a median OS of 28.5 months (95% confidence interval (CI) 10.5–49.5) in underweight, 38.8 months (95% CI 36.3–42.8) in normal weight, 39.8 months (95% CI 36.4–45.8) in overweight, and 38.8 months (95% CI 32.7–45.6) in obese patients (log-rank p-value = 0.14) (Fig. 2 a). However, after adjustment for potential confounders, the OS of underweight patients tended to be worse than the OS of normal weight patients (hazard ratio (HR) 1.45; 95% CI 0.97–2.15; p = 0.07), though not statistically significant. The OS of overweight and obese patients was similar to the OS of normal weight patients (adjusted HR 0.99; 95% CI 0.85–1.16; p = 0.93 and adjusted HR 1.04; 95% CI 0.88–1.24; p = 0.62, respectively) (Table 2 ). In patients with metachronous metastases, the detrimental effect of underweight on OS was stronger and statistically significant (adjusted HR 1.85; 95% CI 1.13–3.05; p = 0.02) (Supplementary Figure 2a and Supplementary Table 2; model 3). The prognostic effect of BMI on OS was not modified by age at diagnosis, period of treatment, or type of first-line treatment (Table 2 ).
Prognostic impact of BMI on PFS
The PFS was not statistically significantly different between BMI classes, with a median PFS of the first-given systemic therapy of 15.6 months (95% CI 7.6–20.5) in underweight, 15.5 months (95% CI 13.4–16.7) in normal weight, 16.8 months (95% CI 15.2–19.2) in overweight, and 17.3 months (95% CI 14.7–19.1) in obese patients (log-rank p-value = 0.16) (Fig. 2 b). After adjustment for potential confounders, when compared with normal weight patients, no statistically significant differences in PFS were observed in underweight (HR 1.05; 95% CI 0.73–1.51; p = 0.81), overweight (HR 0.90; 95% CI 0.79–1.03; p = 0.14), or obese patients (HR 0.88; 95% CI 0.76–1.02; p = 0.10) (Table 3 ). Similar results were observed in patients with metachronous metastases after additional correction for endocrine sensitivity (Supplementary Figure 2b and Supplementary Table 3; model 3). No signs of effect modification by either age at diagnosis, period of treatment, or type of first-line treatment were present (Table 3 ).

Discussion
In this study on a real-world cohort of 1456 patients diagnosed with HR+/HER2− ABC who received endocrine therapy with or without a CDK 4/6 inhibitor as first-given systemic therapy in the Netherlands between 2007 and 2020, we evaluated whether BMI is an independent prognostic factor for OS and PFS. In contrast to the findings in patients with EBC, we observed that neither overweight nor obesity was associated with either OS or PFS. Interestingly, however, we observed that underweight patients tended to have a lower OS when compared with normal weight patients.
Our results regarding the lack of association between a higher BMI and breast cancer outcomes are consistent with the results of other studies on patients diagnosed with HR+/HER2− ABC [ 11 – 13 , 15 ]. In a recent study of the French ESME cohort, for example, both overweight and obesity did not seem to affect the OS of 7844 patients diagnosed with HR+/HER2− ABC with a HR of 0.95 (95% CI 0.88–1.03) and a HR of 0.99 (95% CI 0.90–1.08), respectively, using normal weight as the reference [ 12 ]. Correspondingly, in a large pooled analysis of the MONARCH 2 and 3 trials including 1138 patients diagnosed with HR+/HER2− ABC who received either endocrine monotherapy or endocrine therapy in combination with abemaciclib, no differences in PFS were observed between normal weight and overweight and obese patients in both treatment arms [ 13 ]. These results are further corroborated by a study among 219 women with HR+ABC on first- or second-line treatment with an aromatase inhibitor, in which no difference in PFS was observed between patients with a BMI of < 27 kg/m 2 and patients with a BMI of ≥ 27 kg/m 2 [ 15 ]. Therefore, our results add to the available evidence on the lack of a prognostic effect of overweight and obesity in patients diagnosed with HR+/HER2− ABC.
The lack of a prognostic effect of both overweight and obesity in our cohort of patients with HR+/HER2− ABC stands in stark contrast to the well-documented adverse prognostic effect of overweight and obesity in patients with EBC [ 5 , 6 ]. For example, in a meta-analysis including patients diagnosed with HR+/HER2− EBC, obesity resulted in a statistically significant decrease in both disease-free survival (DFS) (HR 1.26; 95% CI 1.13–1.41) and OS (HR 1.39; 95% CI 1.20–1.62) when compared with normal weight [ 5 ]. The lack of a prognostic effect of overweight and obesity in patients with HR+/HER2− ABC might potentially be explained by the recently emerged “obesity paradox”. This phenomenon is defined by the finding of an inverse rather than an adverse association between a higher BMI and (breast cancer) outcomes, a finding that has been observed in several studies among patients with metastatic cancer [ 28 – 33 ]. Potential mechanisms for the obesity paradox comprise both methodological and clinical explanations [ 28 , 29 ]. Methodological explanations, for example, include the use of BMI as an inadequate measurement tool for adiposity, confounding by smoking, detection bias, and reverse causation. On the other hand, clinical explanations include the presence of less aggressive tumours in obese patients, an enhanced treatment response in obese patients, and a greater energy reserve that may confer a survival benefit in the treatment of ABC.
An interesting finding of our study is the adverse prognostic effect associated with an underweight BMI classification, though results were not statistically significant and limited by the small number of underweight patients included in this study. Specifically, we observed that underweight patients tended to have a lower OS when compared with normal weight patients (HR 1.45; 95% CI 0.97–2.15). In the French ESME study mentioned earlier, underweight (versus normal weight) was also identified as a negative prognostic factor for OS (HR 1.11; 95% CI 1.01–1.22) [ 12 ]. Moreover, an adverse association between underweight and OS has also been observed in patients with EBC [ 34 , 35 ]. However, BMI does not distinguish between lean tissue and fat tissue, and may therefore not be the most appropriate measurement tool for body composition, and sarcopenia in particular [ 36 ]. It is important to mention this limitation of BMI as several smaller cohort studies have shown that sarcopenia is associated with an adverse prognosis in ABC [ 37 – 39 ]. Hence, the adverse prognostic effect of underweight may also be related to the presence of sarcopenia in our cohort.
The use of a large prospectively maintained retrospective cohort study including all patients diagnosed with HR+/HER2− ABC in the southeast of the Netherlands is a major strength of our study. The classification of underweight patients as a separate group is another strength of our study, even though the small number of patients impacted the power of the results. Our study also has some limitations. We did not collect information about BMI or weight change prior to diagnosis of ABC. It is possible that underweight patients lost weight shortly before diagnosis of ABC as a result of more aggressive disease, and consequently experienced an adverse prognosis. This phenomenon is referred to as ‘reverse causation’. In addition, 764 patients with HR+/HER2− ABC did not have a BMI measurement at diagnosis and were consequently excluded from this study, possibly introducing selection bias.
In this large prospectively maintained retrospective cohort study including 1456 patients diagnosed with HR+/HER2− ABC, overweight and obesity were prevalent, while underweight was uncommon. In contrast to the findings in EBC, we showed that overweight and obesity do not impact the prognosis of patients with ABC. This lack of association was observed regardless of age at diagnosis of ABC, period of treatment, or type of first-line treatment. Interestingly, at the same time, we showed that underweight is a potential adverse prognostic factor for OS. However, as only a limited number of underweight patients were included in this study and information about BMI before ABC diagnosis and the presence of sarcopenia was lacking, our results should be considered as hypothesis-generating and therefore need to be confirmed in other studies. Nonetheless, these findings stress the importance of recognising underweight patients as a separate group of patients and support adequate monitoring of underweight patients.

Purpose
This study determines the prognostic impact of body mass index (BMI) in patients with hormone receptor-positive/human epidermal growth factor receptor-2-negative (HR+/HER2−) advanced (i.e., metastatic) breast cancer (ABC).
Methods
All patients with HR+/HER2− ABC who received endocrine therapy ± a cyclin-dependent kinase 4/6 inhibitor as first-given systemic therapy in 2007–2020 in the Netherlands were identified from the Southeast Netherlands Advanced Breast Cancer (SONABRE) registry (NCT03577197). Patients were categorised as underweight (BMI: < 18.5 kg/m 2 ), normal weight (18.5–24.9 kg/m 2 ), overweight (25.0–29.9 kg/m 2 ), or obese (≥ 30.0 kg/m 2 ). Overall survival (OS) and progression-free survival (PFS) were compared between BMI classes using multivariable Cox regression analyses.
Results
This study included 1456 patients, of whom 35 (2%) were underweight, 580 (40%) normal weight, 479 (33%) overweight, and 362 (25%) obese. No differences in OS were observed between normal weight patients and respectively overweight (HR 0.99; 95% CI 0.85–1.16; p = 0.93) and obese patients (HR 1.04; 95% CI 0.88–1.24; p = 0.62). However, the OS of underweight patients (HR 1.45; 95% CI 0.97–2.15; p = 0.07) tended to be worse than the OS of normal weight patients. When compared with normal weight patients, the PFS was similar in underweight (HR 1.05; 95% CI 0.73–1.51; p = 0.81), overweight (HR 0.90; 95% CI 0.79–1.03; p = 0.14), and obese patients (HR 0.88; 95% CI 0.76–1.02; p = 0.10).
Conclusion
In this study among 1456 patients with HR+/HER2− ABC, overweight and obesity were prevalent, whereas underweight was uncommon. When compared with normal weight, overweight and obesity were not associated with either OS or PFS. However, underweight seemed to be an adverse prognostic factor for OS.
Supplementary Information
The online version contains supplementary material available at 10.1007/s10549-023-07108-6.
Keywords

Supplementary Information
Below is the link to the electronic supplementary material.

Abbreviations
ABC: Advanced breast cancer
BMI: Body mass index
CI: Confidence interval
CDK 4/6: Cyclin-dependent kinase 4/6
DFS: Disease-free survival
EBC: Early breast cancer
HER2: Human epidermal growth factor receptor 2
HR+: Hormone receptor-positive
HR: Hazard ratio
IQR: Interquartile range
MFI: Metastatic-free interval
OS: Overall survival
PFS: Progression-free survival
Acknowledgements
The SONABRE registry was funded by the Netherlands Organisation for Health Research and Development (ZonMw: 80-82500-98-8003), Roche, Pfizer, Novartis, Eli Lilly, Daiichi Sankyo, AstraZeneca, and Gilead. We would like to thank all SONABRE data managers of the department of Medical Oncology of the Maastricht University Medical Centre (MUMC +), Maastricht, The Netherlands.
Author contributions
SWML, HT, IJHV, MM, SMEG, and VCGT-H contributed to conceptualisation, methodology, and investigation; FLGE, MWD, BEPJV, KNAN, MJAEP, LMHvdW, AJvdW, NAJBP, and JT contributed to resources; SWML, HT, and SMEG performed data curation, formal analysis, and visualization; VCGT-H performed supervision; SWML, HT, IJHV, SMEG, and VCGT-H prepared the original draft of the manuscript; All authors carefully reviewed the first draft of the manuscript and provided feedback when necessary. SWML, IJHV, SMEG, and VCGT-H discussed feedback and prepared the final manuscript. All authors gave approval for publication of the final manuscript.
Funding
This work was supported by the Netherlands Organisation for Health Research and Development (ZonMw: 80-82500-98-8003), Roche, Pfizer, Novartis, Eli Lilly, Daiichi Sankyo, AstraZeneca, and Gilead.
Data availability
Data will be shared with interested researchers who are able to provide a methodologically sound proposal with well-defined research questions. Researchers are welcome to contact the corresponding author for more information at [email protected].
Declarations
Conflict of interest
SWML reports grants from AstraZeneca and Eli Lilly outside the submitted work. IJHV reports grants from AstraZeneca, Pfizer, and Eli Lilly outside the submitted work. MM reports institutional grants from Roche, Pfizer, Novartis, Eli Lilly, and Gilead during the conduct of the study. MWD had a consulting role for Novartis. JT had a consulting role for Amgen. NJAT reports institutional grants from Roche, Pfizer, Novartis, Eli Lilly, Daiichi Sankyo, AstraZeneca, and Gilead during the conduct of the study. SMEG reports institutional grants from Roche, Pfizer, Novartis, Eli Lilly, Daiichi Sankyo, AstraZeneca, and Gilead during the conduct of the study, and personal fees from AstraZeneca outside the submitted work. VCGT-H reports grants and personal fees from AstraZeneca, Novartis, and Eli Lilly during the conduct of the study, and grants from Roche, Pfizer, Daiichi Sankyo, and Gilead during the conduct of the study. VCGT-H has a consulting role for AstraZeneca, Eli Lilly, and Novartis. The other authors have declared no conflicts of interest.
Ethical approval
This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Medical Research Ethics Committee of the Maastricht University Medical Centre, Maastricht (METC 15-4-239).
Consent to participate
The need for informed consent was waived by the medical ethics committee.

License: CC BY. Citation: Breast Cancer Res Treat. 2024 Oct 25; 203(2):339-349.
PMC10787676

Introduction
In most bird species, breeding success and nesting behaviors change with age and experience, with experienced individuals usually better concealing their nests to improve nest survival (Marzluff 1988; Kleindorfer 2007a; Öst and Steele 2010; Horie and Takagi 2012). Females may prefer older males, particularly in species where males build the nest. For instance, in the Small Tree Finch (Camarhynchus parvulus), males build the nest and increase the proportion of black plumage on the head and chin with each annual molt until they attain a completely black head in their fifth year (Kleindorfer 2007a). Female Darwin's finches more quickly pair with older, darker males (Kleindorfer et al. 2019a), and pairs with an older male experienced higher breeding success because of lower nest predation (Kleindorfer 2007a). The proximate cause for lower nest predation in Small Tree Finches is thought to be nest placement, as nests of older males were more concealed and positioned higher up in the canopy, a pattern found in other studies too (Wappl et al. 2020; Heyer et al. 2021).
Males may use their own local breeding success as a patch quality cue that integrates the effect of various nest site attributes on breeding performance (Danchin et al. 1998 ; Doligez et al. 2002 ; Mariette and Griffith 2012 ). That is, males may return to a particular breeding site if they were previously successful at that site. Perhaps older males select safer nest sites based on experience and/or an assessment of prevailing predation risk. For example, Møller ( 1989 ) showed that older Northern Wheatear ( Oenanthe oenanthe ) males adjusted nesting height in relation to previous nesting outcome, with evidence that ground nesting birds may adjust their nest site and nest concealment according to predation risk. In general, older males have more breeding experience than younger males, though the mechanisms by which they evaluate previous experience and whether they make informed choices about future nest site selection is often unknown and can no doubt vary between systems.
In many species, individuals form breeding aggregations, which may carry significant benefits, such as protection from predators via a dilution effect that lowers individual detectability by predators, for example (Hamilton 1971 ; Rubenstein 1978 ). More individuals in an area can also increase predation risk by attracting predators to an area (Hassell et al. 1977 ; Hammond et al. 2007 ). While nesting in close proximity may increase the risk of extra-pair paternity or intra-specific brood parasitism (Brown and Brown 1989 ; Stewart et al. 2010 ), forming nesting associations with heterospecifics may circumvent that problem. For instance, in Darwin’s Finches on Santa Cruz Island, Galapagos, birds that nested in ‘mixed species associations’ with heterospecific neighbors had less nest predation (Kleindorfer et al. 2009 ). Close proximity to conspecifics also increases food competition, and local food abundance can affect the number and composition of individuals in an area (Forero et al. 2002 ; Booth 2004 ). Therefore, the neighborhood composition may influence nest survival, and in some systems, birds may avoid areas with many conspecific food competitors or favor areas with many heterospecific neighbors that provide additional anti-predator defense.
Nesting habitat also has consequences for the sensory experience of birds, which in turn may influence their fitness. Songbirds are vocal production learners that acquire their song from conspecifics (Nelson et al. 1995; Catchpole and Slater 2008; Plamondon et al. 2008; Konishi 2010). Exposure to a tutor's song that can become a song template is therefore a fundamental experience that guides song learning (Nottebohm 1972; Grant and Grant 1996). When individuals differ in song syllable composition and there is competition to transfer song syllable type to offspring (Evans and Kleindorfer 2016), fathers that nest in heterospecific neighborhoods may have an advantage in transmitting their song type to offspring. Fathers in heterospecific neighborhoods should have less competition or interference for song syllable transmission compared to fathers with many conspecific neighbors, because some offspring may attend to non-paternal conspecific song types (see also Katsis et al. 2018, 2023; Colombelli-Negrel et al. 2021). Learning and discrimination, including elementary forms of vocal production learning, can begin in the egg in some songbirds. For example, Superb Fairywrens (Malurus cyaneus) produce a vocally acquired call after hatching, copied from their (foster) mother's in-nest call elements during incubation (Colombelli-Négrel et al. 2012), and embryos across avian taxa have been shown to learn to discriminate between sounds in ovo (Colombelli-Négrel and Kleindorfer 2017; Rivera et al. 2018; Colombelli-Négrel et al. 2021). In an elegant field study, Mennill et al. (2018) showed that wild Savannah Sparrows (Passerculus sandwichensis) learned their songs from experimentally broadcast tutors placed near the nest in the wild.
Thus, the acoustic neighborhood is expected to play a significant role in vocal learning when embryos and nestlings of vocal production learners are exposed to song, though to date there are few studies that measure the acoustic neighborhood at the time of nesting across species.
There is a strong positive association between vegetation diversity and the avian diversity it supports (Lantz et al. 2011; Weisshaupt et al. 2011; La Sorte et al. 2020; Geladi et al. 2021). Forests with more canopy cover and taller trees sustain more bird species (Kirk and Hobson 2001). Similarly, in urban areas, avian species richness was higher in parks with more vegetation coverage (La Sorte et al. 2020). Vegetation cover may be associated with multi-level species richness and may also create conditions for lower predation risk when songbirds select nest sites with more vegetation cover. In general, older males are expected to compete for and occupy better-quality territories (e.g., trees with broad canopy cover) that sustain more food resources (Sherry and Holmes 1989; Pärt 2001) and, when nest sites are more concealed in dense vegetation, lower predation risk (Hill 1988). A diverse heterospecific neighborhood could be a by-product of nest site preference for food or safety (associated with dense vegetation cover), but in turn may facilitate other pathways, for example, acoustic habitat imprinting (see also Davis and Stamps 2004).
The aim of this study was to test whether the nest sites of older male Small Ground Finches (Geospiza fuliginosa) and Small Tree Finches differ in predictable ways from the nest sites of younger males, with specific attention to the singing activity of heterospecific and conspecific neighbors, as well as vegetation characteristics. First, we aimed to replicate on Floreana Island the findings from Santa Cruz Island that older males occupy areas with more vegetation cover, nest in taller trees, and place their nests higher up. Second, we tested the new prediction that the acoustic neighborhood experienced by the offspring of older males is more species rich, with higher singing activity. Specifically, we predicted that older males nest in areas with more heterospecific neighbors and thus more heterospecific vocal activity, while younger males have more conspecific neighbors and more conspecific vocal activity. We also predicted that the nest sites of older males have more canopy cover and that their nests are located higher up in taller trees. If nest predation is associated with singing activity (because sound alerts predators to an active area to search for nests), then we predicted increased nest predation at nests with higher singing activity.

Methods
Study site and study species
This study was conducted on Floreana Island (−1.299829, −90.455674) during the onset of nesting in the Darwin's finch breeding season, which peaks during February and March and coincides with the onset of heavier rains, usually during January and February (rainfall data can be accessed via https://www.galapagosvitalsigns.org). The nesting data were collected during February–March 2020 and February 2022 at 55 Darwin's finch nests (Table 1), including Small Ground Finches (G. fuliginosa) (N = 33) and Small Tree Finches (C. parvulus) (N = 22). The nests were located across eight 100 × 200 m study plots in the highland Scalesia forest near Cerro Pajas or in two 100 × 200 m study plots at Asilo de la Paz, also a Scalesia-dominated forest.
From a long-term study using color-banded birds, Darwin's finches are socially monogamous per brood (Grant and Weiner 1999; Keller et al. 2001; Kleindorfer 2007b). The onset of nesting occurs during the onset of heavier rains from January to March. Males use song and behavioral displays to defend small nesting territories (ca. 20 m²) against intruders. During higher rainfall years, males may build several nests while singing to attract females, and eventually a female may choose one of the nests for egg-laying (Kleindorfer 2007a). However, during this study in 2020 and 2022, both years had low to moderately low rainfall and each male only built one display nest. The female is a uniparental incubator and the incubation phase lasts 12–14 days (Kleindorfer 2007a, b). Both parents provide food deliveries to nestlings until they fledge after 12–14 days (Kleindorfer et al. 2021a). Between 17 and 60% of highland Darwin's finch nests are depredated across species and years (Kleindorfer 2007a, b; Kleindorfer and Dudaniec 2009; O'Connor et al. 2010; Cimadom et al. 2014; Kleindorfer et al. 2021b). In both species, males build a dome-shaped nest, often in Scalesia pedunculata trees. The avian vampire fly (Philornis downsi) is a major cause of nesting failure. On Floreana Island, newly built nests and nests with eggs do not contain P. downsi; only nests with chicks contain avian vampire fly larvae (Common et al. 2019, 2023). In this study, 18 of the nests progressed to the chick stage in 2020, for which we also had information on the number of P. downsi larvae and pupae at the time of nesting outcome; there was no association between male age and number of vampire flies (r = −0.002, p = 0.992, n = 18). In 2022, a year with low rainfall, Darwin's finches built display nests and sang at the nest, but no eggs were laid and hence there were no avian vampire flies in finch nests in the 2022 data.
Male age
Darwin's finch males can be aged in the field using binoculars based on the proportion of black plumage. In Darwin's tree finches (Fig. 1), the proportion of black plumage on the chin and crown increases with each year of molt until they obtain a fully black head after about five years (Lack 1947; Kleindorfer 2007a; Langton and Kleindorfer 2019). In Darwin's ground finches, the proportion of black plumage increases with each year of molt across five years (Fig. 2), until the male acquires full black plumage across its body (Grant and Grant 1987). Female Tree Finches remain olive green and female Ground Finches remain grayish across their lives, and cannot be aged from plumage. The age classification of males is based on the six classes described by Grant and Grant (1987) for Small Ground Finches and by Kleindorfer (2007a) for Small Tree Finches (Figs. 1, 2). The change in plumage with age gives us the rare opportunity to study the effects of age on nest site attributes, and how these are associated with the acoustic neighborhood near the nest, nest site vegetation, and predation outcome using an observational approach. The sample size per age class and species in this study is as follows: (i) Small Ground Finches B1 = 2, B2 = 3, B3 = 4, B4 = 3, B5 = 21, and (ii) Small Tree Finches B0 = 2, B1 = 1, B2 = 6, B3 = 2, B4 = 5, B5 = 6.
Nest monitoring and nest site characteristics
Nests were monitored following the standardized protocol that we developed in 2000 and maintained throughout the study (Kleindorfer et al. 2014; Common et al. 2020). Nests were routinely inspected, with binoculars and ladder during 2004 to 2006 and since 2008 with a borescope, every three days during incubation and every two days during the nestling phase to confirm activity. Nesting height estimation was practiced using a laser rangefinder (LTI laser rangefinder) prior to field work, using clearly visible trees on campus at Flinders University, Australia. The laser rangefinder was first pointed at the base of the tree and then at the top to compute two vertical angles, from which tree height was calculated. We calibrated among team members at the start of the field season and visually estimated tree height in meters above ground during field work.
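The two-angle height calculation described above can be sketched as follows, assuming the horizontal distance to the trunk is known (a common clinometer formula; the exact routine implemented by the LTI rangefinder may differ):

```python
import math

def tree_height(horizontal_dist_m: float,
                angle_top_deg: float,
                angle_base_deg: float) -> float:
    """Tree height from two vertical angles measured at the same spot.
    Angles are relative to horizontal; the base angle is typically
    negative (below eye level). Assumes the horizontal distance to the
    trunk is known -- a standard clinometer formula, not necessarily
    the exact LTI rangefinder routine used in the study."""
    return horizontal_dist_m * (
        math.tan(math.radians(angle_top_deg))
        - math.tan(math.radians(angle_base_deg))
    )

# Standing 10 m from the trunk, top at +45 deg, base at -5 deg:
print(round(tree_height(10.0, 45.0, -5.0), 2))  # 10.87 (meters)
```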
We measured the following nest-site vegetation characteristics per nest within two weeks of nest building: (1) nesting height (m above the ground; ocular estimation after training with a laser rangefinder on campus at Flinders University), (2) nesting tree height (ocular estimation after the same training), (3) percentage canopy cover within 1 m around the nest (ocular estimation after calibration training with botanist Heinke Jaeger in the field in 2020), and (4) percentage ground cover (ocular estimation calculated for 4 × 5 m quadrats at the base of the nest).
Video and audio recordings at nests
Video and audio data were collected using GoPro cameras (GoPro Hero 7, GoPro Inc.) placed within 5 m of the nest. GoPro cameras were attached to metal hooks and hung on branches with an extendable 6 m pole, 1–5 m from the nest. Each nest was recorded once during building, incubation and/or feeding (sample sizes in Table 1). The average GoPro recording duration per nest was 33 ± 3 min (mean ± SD). We made two to three recordings per day per nest and used the first and last recording of each nest for our analyses (mean ± SD = 1.95 ± 1.1 recordings per nest). All recordings were made between 0600 and 1000 during the month of February, which is generally the month with the onset of nest building in Darwin's finches on Floreana Island.
Solomon Coder (Péter 2019) was used to systematically extract information from the video recordings and calculate the number of singing events in the neighborhood of the nest. All songs heard were recorded and sampled within a radius of ~25 m per nest, as this was the detection range of the GoPro sound recordings.
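A per-minute singing rate of this kind can be computed from annotated song onset times; the helper below is hypothetical and only illustrates how the variable is derived, not the authors' actual extraction code:

```python
def songs_per_minute(event_times_s, duration_s, by=None):
    """Singing events per minute from annotated song onset times (s).
    `by` optionally labels each event (e.g. 'con'/'het' for
    conspecific/heterospecific). Hypothetical helper mirroring the
    per-minute rates extracted from the video annotations."""
    minutes = duration_s / 60.0
    if by is None:
        return len(event_times_s) / minutes
    return {label: by.count(label) / minutes for label in set(by)}

# Six songs annotated in a 3-minute clip, two conspecific, four heterospecific
times = [5, 40, 70, 100, 130, 170]
labels = ["con", "het", "het", "het", "con", "het"]
print(songs_per_minute(times, 180))          # 2.0 songs per minute overall
print(songs_per_minute(times, 180, labels))  # {'con': ~0.67, 'het': ~1.33}
```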
Species identification from song recordings
There are a total of six songbird species in the highlands of Floreana Island, plus birds from six other avian taxa (Kleindorfer et al. 2019b) (see Table 2). Songs and calls were compared against a long-term database managed by Kleindorfer for two decades, with 7000+ songs and calls from most species; if a sound could not be identified, the clip was posted to the Galapagos Land Bird WhatsApp group, and long-term Galapagos ornithologists (e.g., Birgit Fessl, Thalia Grant, Tui de Roy) provided their expert opinion, which always achieved 100% consensus. Sound identification was also facilitated because only 12 avian land bird taxa (Table 2) are present in the highlands of Floreana Island. The calls of the species listed are identifiable species signals and hence, after training on available recordings and with expert advice, it is likely that all vocalizations were correctly classified to species level.
Data analysis
All data analyses were conducted using R v.4.1.0 (R Core Team 2021 ). We analyzed the following variables: (1) male age (assessed from plumage categories shown in Figs. 1 and 2 ), (2) number of total singing events per minute (conspecific + heterospecific songs) in the vicinity of active nests, (3) subset: number of heterospecific singing events per minute, (4) subset: number of conspecific singing events per minute, (5) number of neighboring nests in a 35 m radius of the focal nest (we selected this cut-off as it could have overlapped with the 25 m audible recording range of the GoPro recordings), (6) vegetation canopy cover (% cover), (7) ground cover (%), (8) tree height (m), (9) nesting height (m), and (10) breeding status (nest building, incubation, chick feeding). In terms of nesting outcome, we analyzed variables in relation to whether the nest was depredated or not, but only for the nests recorded in 2020 as this information is not available for 2022 (the field work ended before nesting outcome was known).
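The neighbor-count variable (nests within a 35 m radius of the focal nest) reduces to a simple distance check; a hypothetical sketch using Euclidean distances on plot coordinates in meters (the study's actual workflow is not described in detail):

```python
import math

def neighbors_within(focal, nests, radius_m=35.0):
    """Count nests within `radius_m` of a focal nest, given (x, y)
    plot coordinates in meters. Hypothetical helper illustrating the
    neighbor-count variable used in the models; not the authors' code."""
    fx, fy = focal
    return sum(
        1 for (x, y) in nests
        if (x, y) != (fx, fy)                     # exclude the focal nest itself
        and math.hypot(x - fx, y - fy) <= radius_m
    )

nests = [(0, 0), (10, 10), (30, 0), (60, 60)]
print(neighbors_within((0, 0), nests))  # 2 (neighbors at ~14.1 m and 30 m)
```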
To test our predictions, we used linear mixed models with the package ‘lme4’ (Bates et al. 2015 ) and ‘arm’ (Gelman 2011 ). The distribution of the residuals and the models’ assumptions were tested and assessed visually using the package ‘DHARMa’ (Hartig 2021 ). For every prediction, we first conducted a general model without the species distinction and a second model where species was considered separately. First, we explored the general pattern for a difference between younger and older males regardless of the species. Next, we tested if there is a difference in this effect between the species.
We used a pseudo-Bayesian framework with non-informative priors using the packages ‘arm’ (Hilbe 2009 ; Gelman 2011 ) and ‘lme4’ (Bates et al. 2015 ). For every linear mixed model (package ‘lme4’), the restricted maximum-likelihood estimation method was applied. In each model, we applied the function ‘sim’ and carried out 10,000 simulations to obtain the posterior distribution of every estimate, the mean value and the 95% credible interval (CrI) (Korner-Nievergelt 2015 ). CrIs provide information about uncertainty around the estimates. We considered an effect to be statistically meaningful when the 95% CrI did not overlap with zero. A threshold of 5% is equivalent to the significance level in a frequentist framework (i.e. p -value of 0.05; Korner-Nievergelt 2015 ). For depredation, the response variable was binary (0 = no predation event, 1 = nest depredated) and modeled with a binomial distribution using the logit-link function.
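The simulation-based 95% CrI described above amounts to taking empirical percentiles of the posterior draws and checking whether the interval excludes zero; a minimal Python sketch of the logic (the study used R's 'arm' and 'lme4' packages, not this code):

```python
def credible_interval(draws, level=0.95):
    """Empirical credible interval: the (1-level)/2 and (1+level)/2
    percentiles of simulated posterior draws, via simple sorted-index
    truncation. Sketches the 'sim' approach described above."""
    s = sorted(draws)
    lo_i = int((1 - level) / 2 * (len(s) - 1))
    hi_i = int((1 + level) / 2 * (len(s) - 1))
    return s[lo_i], s[hi_i]

def is_meaningful(draws, level=0.95):
    """Effect is 'statistically meaningful' when the CrI excludes zero."""
    lo, hi = credible_interval(draws, level)
    return lo > 0 or hi < 0

draws = [i / 100 for i in range(1, 1001)]  # all-positive toy posterior draws
lo, hi = credible_interval(draws)
print(lo, hi, is_meaningful(draws))  # 0.25 9.75 True
```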
Male age and heterospecific singing activity
To analyze whether older males build nests in sites with more heterospecific singing activity, we used two linear mixed-effect models (REML fit). In both, the response variable was the number of heterospecific songs per minute. In the first model, the explanatory variables were the total number of nests within 35 m (a proxy for nesting density) and male age. In the second model, the explanatory variables were the total number of nests within 35 m, male age, and the interaction between male age and species. In both models, Nest ID was included as a random factor to account for repeated measures at the same nest, and breeding status was included to account for variance across breeding stages.
Male age and conspecific singing activity
To analyze the converse of our predicted association between male age and the number of heterospecific neighbors, we tested whether younger males have nest sites with more conspecific neighbors and more conspecific singing activity (and hence, likely, more conspecific competition). We used the same approach as above, namely two linear mixed-effect models (REML fit) with the response variable 'number of conspecific singing events per minute'. In the first model, the explanatory variables were male age, the total number of nests within 35 m (a proxy for nesting density), and their interaction. In the second model, the explanatory variables were the total number of nests within 35 m and male age in interaction with species. In both models, Nest ID was included as a random factor to account for repeated measures of the same nest, and breeding status was included to account for differences across breeding phases. Here, the residual diagnostics in both models showed a slight (but still acceptable) deviation in one assumption (residual vs. predicted quantiles) that could probably be overcome with larger sample sizes. In 2022, the onset of singing activity occurred later in the season and singing activity was lower, likely because rainfall was lower in 2022 than in 2020 (Floreana data: mean rainfall Feb 2022 = 2.3 mm; mean historic rainfall Feb = 104.1 mm; https://www.galapagosvitalsigns.org); there were also many zero values for conspecific song in 2022 compared with 2020, although heterospecific song activity had few zero values in either year.
Effect of male age on nest site vegetation characteristics
Before assessing whether the vegetation characteristics of nest sites differed between older and younger males, we first performed a Spearman correlation test among all the vegetation variables that we measured: canopy cover, ground cover, tree height, and nesting height (Figure S1). We used a Spearman correlation because the variables were not normally distributed and their distributions differed markedly from one another. Ground cover and canopy cover were strongly correlated (rho = −0.491, p < 0.001), as were tree height and nesting height (rho = 0.783, p < 0.001). Because of this, and because previous research identified an association of canopy cover and nesting height with nesting success in this system, we used these two variables in the models to test the association between male age and nest site vegetation characteristics.
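Spearman's rho, as used in this correlation screen, is simply the Pearson correlation of ranks; a minimal tie-free sketch (the study's actual analysis was done in R, and tied values would require average ranks, which this sketch omits):

```python
def spearman_rho(x, y):
    """Spearman rank correlation for tie-free data: Pearson correlation
    computed on the ranks of each variable. Minimal illustration of the
    correlation screen described above; not the authors' R code."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

print(spearman_rho([1, 2, 3, 4], [10, 20, 25, 90]))  # ≈ 1.0 (monotonic increase)
print(spearman_rho([1, 2, 3, 4], [90, 25, 20, 10]))  # ≈ -1.0 (monotonic decrease)
```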
The degree of association between male age and nest site canopy cover and nesting height was estimated using one linear model per variable. Each model had male age and species as explanatory variables, and their interaction.
Effect of number of singing events (general song-activity) and nest site vegetation on predation outcome
We used binary logistic regression with nest predation outcome (0 = not depredated, 1 = depredated) as the binary dependent variable against total number of songs per minute, nesting height, and nest site canopy cover as predictor variables.

Results
Male age and heterospecific singing activity
Older males had significantly more heterospecific singing activity near the nest ( n = 55, Mean estimate [95% CrI] = 2.088 [0.447, 3.714], Table S1a) compared to younger males (Fig. 3 ). This pattern was strongest in Small Ground Finches (Mean effect size [95% CrI] = 2.14 [0.14, 4.19]; n = 33), and weak in Small Tree Finches (Mean effect size [95% CrI] = 0.53 [− 2.31, 3.38]; n = 22; Table S1). The number of nesting neighbors did not influence the heterospecific singing activity in the territory; neither did the breeding status during which the nesting territories were recorded (Table S1).
Male age and conspecific singing activity
There was no evidence that the level of conspecific singing activity within a 25 m radius of a male's nest changed with male age (Fig. 4). This also held true when accounting for both species separately in the statistical model (mean effect size [95% CrI] for Small Ground Finches = −0.35 [−2.01, 1.31], and for Small Tree Finches = −0.91 [−2.54, 0.69]). Rather, the overall number of neighbors was associated with the number of conspecific singing events (n = 55, mean estimate [95% CrI] = 1.738 [0.024, 3.470], Table S2).
Effect of male age on nest site vegetation characteristics
We tested whether nesting height and canopy cover at the nest site were associated with male age and found the same pattern in both species. Nesting height did not vary with male age (Fig. 5; mean slope [95% CrI] for Small Ground Finches = −0.07 [−0.45, 0.32], for Small Tree Finches = 0.06 [−0.32, 0.45], Table S). Regarding vegetation, older male Small Tree Finches nested in areas with significantly more vegetation cover (Fig. 5; mean slope [95% CrI] = 6.33 [1.72, 11.07], Table S3). Male Small Ground Finches showed the same tendency (note the large mean effect size), but with modest statistical support (Fig. 5; mean slope [95% CrI] = 3.32 [−1.42, 8.11], Table S3).
Effect of number of singing events (general song-activity) and nest site vegetation on predation outcome
We know the nesting outcome with certainty for 32 nests (24 Small Ground Finch and 8 Small Tree Finch nests). Using binary logistic regression analysis, there was no effect of the average number of songs per minute on nest predation (r = 0.09, N = 32, p = 0.847) and no association with nesting height (r = 0.528, p = 0.324), but more concealed nests had less predation (r = −0.38, p = 0.036, Fig. 6) and, specifically, older males had less predation (r = −0.18, p = 0.047). The percentage of depredated nests was comparable between Small Tree Finches (2/8, 25%) and Small Ground Finches (5/24, 21%).

Discussion
The main aim of this study was to test whether nest site characteristics, such as vegetation cover and the acoustic neighborhood, differed with male age in two Darwin's finches: the Small Tree Finch and the Small Ground Finch. As predicted, older males built nests in areas with more vegetation concealment, and these nests had less predation. Neither song activity near the nest nor nesting height predicted nest predation. A novel finding of this study is that nest sites of older males were exposed to more heterospecific singing activity, and hence such nest sites can be described as occurring in a richer acoustic neighborhood.
The nest sites of younger and older males differed in several ways, and more research is needed to examine the mechanisms for these patterns. Younger males nested in areas with more conspecific neighbors, and older males nested in areas with more heterospecific neighbors, with more vegetation cover surrounding the nest. Perhaps older males outcompete younger males for access to preferred habitat. In support of this idea, we have observed male take-overs of nests, and in all cases, older (B5) males supplanted and usurped younger (B0, B1) males from nests they had built (Kleindorfer et al. 2021b ). Because older males also have larger badge size (the extent of black plumage on the crown and chin), it is possible that badge size (rather than age per se) predicts the outcome of agonistic interactions, as has been shown in other systems (Olsson 1994 ). While younger male Darwin’s finches may occasionally build a nest in an area with dense vegetation cover that also has many heterospecific neighbors, these nests could subsequently be usurped by older males. Younger males may have a preference for the same nest sites as older males but cannot exercise their choice as they are outcompeted by older males. It remains to be tested if younger males actively avoid areas with older males to reduce the probability of nest usurpation and/or paternity loss through cuckoldry.
Our finding that vegetation cover was associated with lower predation risk adds to a body of evidence linking reduced visual conspicuousness of nests with reduced nest predation (Martin and Roper 1988; Colombelli-Négrel and Kleindorfer 2009). On Floreana Island, there are five nest predators of Darwin's finch nesting contents: the introduced Rat (Rattus rattus), introduced House Mouse (Mus musculus), introduced Cat (Felis catus), introduced Smooth-billed Ani (Crotophaga ani), and endemic Short-eared Owl (Asio flammeus galapagoensis). The number of rats and owls has increased across the past decade (Kleindorfer, unpublished data), not least because owls feed on the ever-increasing rat population. Rats are olfactory hunters that are more common predators at nests closer to the ground, and owls are visual hunters that are more common predators at nests higher in the canopy (Kleindorfer et al. 2021b). In a previous study, we showed that nests at intermediate heights sustained the most larvae from the introduced avian vampire fly (Kleindorfer et al. 2016, 2021b), which is the biggest risk factor for the survival of Darwin's finches (Kleindorfer and Dudaniec 2016; Fessl et al. 2018; McNew and Clayton 2018). Therefore, it is perhaps not surprising that we did not find an effect of nesting height on predation outcome in this study. Future research should explore effects of male age on nesting success and number of vampire flies after the planned predator eradication and predator translocation on Floreana Island managed by the Galápagos National Park Directorate (GNPD). In regard to vegetation cover and biodiversity, our study builds on previous research that found greater biodiversity in areas with greater vegetation diversity (Lantz et al. 2011; Weisshaupt et al. 2011; La Sorte et al. 2020; Geladi et al. 2021), and more bird species in areas with more canopy cover (Kirk and Hobson 2001) or vegetation coverage (La Sorte et al. 2020).
Our study is also in accordance with previous studies on Santa Cruz Island that measured less predation at more concealed nests built by older Darwin's finch males (Kleindorfer 2007a; Wappl et al. 2020; Heyer et al. 2021).
We acknowledge this is an observational study that aimed to explore whether the acoustic neighborhood of males differed in relation to their age class. Possibly the most novel implication of this study is the finding that offspring of older males were exposed to a richer acoustic neighborhood than offspring of younger males. How such an acoustic neighborhood with more heterospecific singing birds might influence neural development (Rivera et al. 2019 ; Schroeder and Remage-Healey 2021 ), gene expression (Antonson et al. 2021 ), tutor preference (Williams 1990 ), attention (Soha and Marler 2000 ; Chen et al. 2016 ), social learning strategy (Farine et al. 2015 ) or other vocal production learning pathways (Katsis et al. 2018 , 2021 ; Mariette et al. 2021 ) remains to be explored. Darwin’s finches are capable of species recognition of song (Ratcliffe and Grant 1995 ), with reduced response to experimental broadcast of local song versus heterospecific song or foreign dialects (Colombelli-Négrel and Kleindorfer 2021 ). Perhaps early-life exposure to different song types influences the magnitude of song discrimination, or the efficacy of song transmission from father to son, which remains to be tested.
It is possible that younger males return to natal sites, or sites that look and sound like their natal site, based on vegetation and acoustic cues. Similar processes have been described for habitat imprinting, for example in cuckoos (Teuschl et al. 1998 ). In a review of the phenomenon of natal habitat preference induction (NHPI), Davis and Stamps ( 2004 ) found evidence for NHPI across a broad range of animal taxa. Our study provides a complementary perspective by raising the possibility that acoustic habitat imprinting may play a role in systems with early-life vocal production learning. The findings raise new research questions about mechanisms of nest site selection using acoustic cues, and ontogenetic consequences of different sound exposure for development and sound preference. In the Darwin’s finch system, older males build display nests in areas with more vegetation cover, males compete for access to these nest sites, females select these nests and males, and offspring are—likely as a by-product—exposed to a richer heterospecific neighborhood. A rich acoustic neighborhood, even if it is ‘only’ a by-product of other preferences shaping nest site selection, could have significant impact on offspring development, which future research could explore.
In summary, there is some evidence presented here that older Darwin’s finches of the Galápagos Islands build nests in areas that may be considered local biodiversity hotspots, because they have more vegetation cover and more heterospecific singing neighbors. While the larger badge size of older males could predict occupation of such (potentially) preferred habitats, little research has been done into the possible effects of natal acoustic neighborhood on individual learning strategy, vocal phenotype, or fitness of offspring growing up in those nests. During this Anthropocene era (Lewis and Maslin 2015 ), when both human activity and infrastructure, and noise and light pollution, are increasingly impacting wildlife, this study provides an example of baseline variance in nest site characteristics in areas without a large human sound footprint. With the observations presented in this study, we hope to spark research interest into consequences of early-life acoustic exposure for development and fitness in vocal production learning species.
Communicated by S. Bouwhuis.
Nesting success tends to increase with age in birds, in part because older birds select more concealed nest sites based on experience and/or an assessment of prevailing predation risk. In general, greater plant diversity is associated with more biodiversity and more vegetation cover. Here, we ask if older Darwin’s finch males nest in areas with greater vegetation cover and if these nest sites also have greater avian species diversity assessed using song. We compared patterns in Darwin’s Small Tree Finch ( Camarhynchus parvulus ) and Darwin’s Small Ground Finch ( Geospiza fuliginosa ) as males build the nest in both systems. We measured vegetation cover, nesting height, and con- vs. heterospecific songs per minute at 55 nests (22 C. parvulus , 33 G. fuliginosa ). As expected, in both species, older males built nests in areas with more vegetation cover and these nests had less predation. A novel finding is that nests of older males also had more heterospecific singing neighbors. Future research could test whether older males outcompete younger males for access to preferred nest sites that are more concealed and sustain a greater local biodiversity. The findings also raise questions about the ontogenetic and fitness consequences of different acoustical experiences for developing nestlings inside the nest.
Supplementary Information
The online version contains supplementary material available at 10.1007/s10336-023-02093-5.
Supplementary Information
Below is the link to the electronic supplementary material.
Acknowledgements
We thank the Galapagos National Park for permission to conduct research (permit no. PC-02-20 and PC-73-21) and the Charles Darwin Foundation for logistical support. We thank all team members for assistance with nest monitoring and data collection, especially David Arango Roldán, Mario Gallego-Abenza, Andrew Charles Katsis, Jefferson García Loor, Alena G. Hohl, Leon K. Hohl, Jody O’Connor, Petra Pesak, and Verena Puehringer-Sturmayr. This publication is contribution number 2538 of the Charles Darwin Foundation of the Galapagos Islands.
Author contributions
SK and ACH conceived the idea and designed the study; ACH, LCK, CA, DCN, and SK collected the data; ACH, NMA, and SK analyzed the data; ACH wrote the first draft of the manuscript. All authors commented on the manuscript.
Funding
Open access funding provided by Austrian Science Fund (FWF). This study was funded by the Australian Research Council (DP190102894) awarded to SK and DCN and the Austrian Science Fund (W1262-B29) awarded to SK.
Data availability
Data are available on the Flinders University data repository at DOI: 10.25451/flinders.23664561.
Declarations
Conflict of interest
The authors declare no financial or non-financial competing interests.
Ethical approval
This research was approved by the Flinders University Animal Welfare Committee (E480-19).
J Ornithol. 2024 Jul 13; 165(1):179-191
PMC10787677
Introduction
Proteins are hydrolyzed by proteases (EC 3.4.11-24) to yield amino acids and bioactive peptides. Aspartic proteases (EC 3.4.23), commonly called acid proteases, are widely used in the production of foods such as cheese, bread, beverages, and meat products, as well as of bioactive peptides (Mamo and Assefa 2018 ; Rocha et al. 2021 ).
They have two highly conserved aspartic acid residues located at the center of the active site that are responsible for their catalytic activity (Guo et al. 2020 ). Pepsin-like and chymosin-like aspartic proteases are the two main types of aspartic proteases. Pepsin-like aspartic proteases are mainly derived from Trichoderma sp., Penicillium sp., and Aspergillus sp., while chymosin-like aspartic proteases are mainly derived from Endothia sp., Rhizopus sp., and Mucor sp. (Da Silva et al. 2016 ; Siala et al. 2009 ). Generally, aspartic proteases have optimal pH values within the range of pH 2.0–6.0 and are stable in acidic environments (Horimoto et al. 2009 ). They prefer to cleave peptide bonds between residues with hydrophobic side chains, such as Leu-Tyr, Phe-Phe, and Phe-Tyr (Mandujano-González et al. 2016 ). Most reported aspartic proteases are mesophilic and show optimal temperatures between 40 °C and 60 °C (Souza et al. 2017 ; Yang et al. 2013 ). Aspartic proteases are first synthesized as inactive precursors (zymogens), which effectively protects the producing cells from uncontrolled proteolysis (Dunn 2002 ). They are usually autocatalytically activated under acidic conditions (Guo et al. 2020 ). In addition, aspartic proteases have broad substrate specificity, and their activities are inhibited by pepstatin A (Guo et al. 2021 ; Souza et al. 2017 ).
Currently, cloning and expression of protease genes by genetic engineering technology are effective ways to identify novel proteases. To date, various proteases have been successfully expressed in Komagataella phaffii ( Pichia pastoris ) (Guo et al. 2021 ; Mechri et al. 2021 ; Song et al. 2021 ). Microorganisms are the preferred sources of proteases, owing to their rapid growth, simple cultivation, and convenience for genetic manipulation (Mamo and Assefa 2018 ). The alkaline serine protease from Trichoderma koningii expressed in K. phaffii displayed enzyme activity of 15,900 U/mL (Shu et al. 2016 ). The neutral protease from Aspergillus oryzae expressed in K. phaffii exhibited enzyme activity of 43,101 U/mL (Ke et al. 2012 ). However, the expression level of acid proteases (aspartic proteases) in K. phaffii is relatively low. The aspartic protease from A. repens expressed in K. phaffii showed enzyme activity of only 1.4 U/mL (Takenaka et al. 2017 ). Two aspartic proteases from Talaromyces leycettanus and Penicillium sp. XT7 expressed in K. phaffii displayed enzyme activities of 67.8 U/mL and 89.3 U/mL, respectively (Guo et al. 2019 ; Guo et al. 2021 ). The aspartic protease from Trichoderma harzianum expressed in K. phaffii exhibited enzyme activity up to 328.1 U/mL (Deng et al. 2018 ). The aspartic protease from A. niger expressed in K. phaffii showed enzyme activity of 1500 U/mL (Wei et al. 2023 ). Thus, high-level expression of aspartic proteases has great application potential.
Bioactive peptides are functional short-chain amino acid sequences that can help alleviate disease without adverse effects on human health (Singh et al. 2022 ). Compared with antihypertensive drugs, angiotensin-I-converting enzyme (ACE) inhibitory peptides produced by the enzymatic hydrolysis of food-derived proteins have become a preferred choice to lower blood pressure owing to their safety and lack of side effects (Gomes et al. 2020 ). Duck blood has been used to prepare bioactive peptides with improved economic value, such as antioxidant peptides (Yang et al. 2020 ) and ACE inhibitory peptides (Wang et al. 2021 ). However, most of the proteases used for the preparation of ACE inhibitory peptides are commercial proteases such as pepsin, trypsin, papain, and bromelain. Therefore, the exploration of novel proteases for the preparation of bioactive peptides is of considerable interest.
In this study, a novel aspartic protease gene ( Tapro A1) from Trichoderma asperellum was cloned and expressed in K. phaffii . Fed-batch fermentation was performed for the production of Ta proA1 in a 5 L fermenter. Ta proA1 was further purified and characterized. Moreover, its valuable application potential was evaluated for the preparation of duck blood peptides with high ACE inhibitory activity. This study aims to provide a suitable protease for the enzymatic conversion of duck blood proteins. | Materials and methods
Strains, plasmids, and reagents
T. asperellum CAU126 was screened, identified, and preserved in the China General Microbiological Culture Collection Center (CGMCC No. 3.5921). Escherichia coli strain DH5α (TransGen, Beijing, China) was employed as the host for the cloning and sequencing of Ta proA1. The K. phaffii GS115 (his4, Mut + , Invitrogen) strain was the chassis host for Ta proA1 expression. pEASY-Blunt (TransGen, Beijing, China) and pPIC9K (Invitrogen, Carlsbad, CA, USA) plasmids were utilized as the cloning and the expression vectors, respectively. FastPfu DNA polymerase, NEBuilder® HiFi DNA Assembly Master Mix, and restriction enzymes (NEB, Frankfurt, Germany) were used for DNA manipulation. Duck blood hemoglobin and plasma protein were obtained from Handan Xinheng Biotechnology Co., Ltd. Casein sodium salt from bovine milk was purchased from Sigma-Aldrich (St. Louis, MO, USA), and all other reagents used herein were commercially accessible and of analytical grade.
Sequence analysis and expression of Ta proA1
The SignalP 4.1 server ( http://www.cbs.dtu.dk/services/SignalP ) predicted the Ta proA1 signal peptide sequence. The molecular weight and isoelectric point of the mature Ta proA1 protein were predicted using the ExPASy ProtParam tool ( https://web.expasy.org/protparam/ ). Clustal Omega was applied to perform multiple sequence alignments ( https://www.ebi.ac.uk/Tools/msa/clustalo/ ). NetNGlyc 1.0 ( http://www.cbs.dtu.dk/services/NetNGlyc/ ) and NetOGlyc 4.0 ( http://www.cbs.dtu.dk/services/NetOGlyc/ ) were used to analyze the glycosylation sites. The maximum likelihood method in MEGA 7.0 was used to construct the phylogenetic tree, which was then evaluated with 1000 bootstrap replicates (Kumar et al. 2016 ).
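As a quick sanity check on the ProtParam mass prediction, the average molecular weight of a mature protein can be recomputed from its sequence by summing average residue masses and adding one water molecule. A minimal Python sketch (the residue masses follow standard average-mass tables; this is an illustrative helper, not the ExPASy implementation):

```python
# Approximate average residue masses (Da), per standard amino-acid tables.
RESIDUE_MASS = {
    "G": 57.0519, "A": 71.0788, "S": 87.0782, "P": 97.1167, "V": 99.1326,
    "T": 101.1051, "C": 103.1388, "L": 113.1594, "I": 113.1594, "N": 114.1038,
    "D": 115.0886, "Q": 128.1307, "K": 128.1741, "E": 129.1155, "M": 131.1926,
    "H": 137.1411, "F": 147.1766, "R": 156.1875, "Y": 163.1760, "W": 186.2132,
}
WATER = 18.01524  # one water molecule for the free N- and C-termini


def average_mw(sequence: str) -> float:
    """Estimate the average molecular weight (Da) of a polypeptide."""
    return sum(RESIDUE_MASS[aa] for aa in sequence.upper()) + WATER
```

For the 36 kDa mature Ta proA1, such a sum covers only the polypeptide chain; O-glycosylation would add to the apparent mass observed on SDS-PAGE.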
Genomic DNA and total RNA were extracted from the mycelia of T. asperellum using the fungal DNA and RNA Midi kit (TianGen, Beijing, China). The PrimeScript™ RT-PCR kit (Takara, Osaka, Japan) was used to reverse transcribe the RNA into cDNA. The Tapro A1 gene was amplified using DNA and cDNA as templates with the primers Ta proA1-F/R (Table S1 ). The PCR products were ligated into the pEASY-Blunt vector (TransGen, Beijing, China) for sequence analysis of the Tapro A1 gene.
The restriction enzymes EcoR I and Not I were utilized to digest the expression vector pPIC9K (Invitrogen, Carlsbad, CA, USA). The coding sequence (without the signal peptide) of Tapro A1 was fused into the digested pPIC9K plasmid to yield the recombinant vector pPIC9K- Ta proA1 by the seamless cloning method. Then, the plasmid pPIC9K- Ta proA1 was confirmed by DNA sequencing and linearized with the restriction endonuclease Sac I. The digested pPIC9K- Ta proA1 plasmid was electrically transformed into K. phaffii GS115 competent cells. The colonies were collected, and their genomes were extracted. Integration of the Tapro A1 gene into the K. phaffii GS115 genome was verified by PCR using the two primers 5′AOX1 and 3′AOX1 (Table S1 ). The screened positive colonies were cultivated in BMMY medium to express Ta proA1, and the protease activity was determined to verify the successful expression of Ta proA1 in the K. phaffii GS115 host.
Production of Ta proA1 in a 5 L fermenter
According to the K. phaffii fermentation instruction manual (Invitrogen, San Diego, CA, USA), fed-batch fermentation was carried out with the engineered K. phaffii GS115 strain in a 5 L fermenter for the production of Ta proA1. The engineered K. phaffii GS115 strain was cultivated in a shake flask containing YPD medium until the OD 600 reached approximately 10.0 for inoculation. The whole fermentation process comprised three stages: batch culture, glycerol feeding culture, and 100% methanol induction culture. At the methanol induction stage, the pH value was adjusted to pH 6.0 with 28% ammonia water, and 100% methanol was added to maintain the dissolved oxygen content above 20%. The protease activity, protein content, and cell wet weight of the samples were determined during the fermentation phase.
Purification of Ta proA1
The fermentation supernatant was obtained by centrifugation at 12,000 rpm and 4 °C for 10 min and then concentrated by the membrane package (10 kDa). The crude enzyme was dialyzed overnight in buffer A (20 mM phosphate buffer pH 6.0) and loaded on the Q-Sepharose Fast Flow (QSFF) column that was pre-equilibrated with buffer A. Unbound proteins were washed using buffer A, and a linear NaCl gradient in elution buffer B (20 mM phosphate buffer pH 6.0, 500 mM NaCl) was used to elute the bound Ta proA1 protein at a flow rate of 1.0 mL/min. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE, 12.5%) was used to analyze the purity of Ta proA1. The gel was stained with Coomassie brilliant blue R-250.
Protease activity and protein content
Ta proA1 activity was determined according to the previously described method with minor modifications (Ichishima 1970 ). In brief, 100 μL of appropriately diluted Ta proA1 (50 mM citrate buffer pH 3.0) was mixed with 100 μL of casein (1%, w/v) solution (prepared in the same buffer) and incubated at 50 °C for 10 min. Then, the reaction was stopped by the addition of 200 μL of 0.4 M trichloroacetic acid (TCA) solution. After 3 min of centrifugation at 10,000 rpm, 100 μL of supernatant was mixed with 500 μL of 0.4 M sodium carbonate solution, followed by the addition of 100 μL of the Folin phenol reagent. After 20 min of incubation at 50 °C, the mixture was cooled to room temperature, and the absorbance was determined at 680 nm. One enzyme activity unit (U) was defined as the amount of protease required to hydrolyze casein to produce 1 μg tyrosine per min under the assay conditions.
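The unit definition above translates directly into a calculation: micrograms of tyrosine released (read from a tyrosine standard curve at A680), scaled by the dilution factor and divided by the reaction time and the enzyme volume in the assay. A minimal sketch under the stated conditions (10 min reaction, 100 μL enzyme); the numbers passed in any call are illustrative, not measured values:

```python
def protease_activity_u_per_ml(tyrosine_ug: float, dilution: float,
                               reaction_min: float = 10.0,
                               enzyme_ml: float = 0.1) -> float:
    """Protease activity in U/mL, with 1 U = 1 ug tyrosine released per min.

    tyrosine_ug: micrograms of tyrosine read from the standard curve.
    dilution:    fold-dilution of the enzyme prior to the assay.
    """
    return tyrosine_ug * dilution / (reaction_min * enzyme_ml)
```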
The Lowry method was performed to determine the protein content, and bovine serum albumin (BSA) was used as the standard protein (Lowry 1951 ). The enzyme activity per milligram of protein was defined as the specific activity (U/mg).
Biochemical characterization of Ta proA1
The optimal pH of Ta proA1 was determined by evaluating the protease activity in 50 mM buffers of various pH values (glycine-HCl, pH 1.5–3.0; citrate, pH 2.5–7.5; and Tris-HCl, pH 7.0–8.0 buffers). Ta proA1 was pre-incubated in the above buffers at 40 °C for 30 min to measure the pH stability. The optimal temperature of Ta proA1 was determined at different temperatures (30–70 °C) in 50 mM citrate buffer pH 3.0. Ta proA1 was pre-incubated for 30 min at the corresponding temperatures to evaluate the thermostability. The concentration of Ta proA1 used in all the assays was 1.73 mg/mL. Protease activity was determined using casein (1%, w/v) as the protein substrate.
Effects of metal ions, chemical reagents, and inhibitors on Ta proA1 activity
A solution of Ta proA1 (1.73 mg/mL) was incubated with various metal ions (Ba 2+ , Ca 2+ , Co 2+ , Cr 3+ , Cu 2+ , Fe 2+ , Fe 3+ , Li + , Mg 2+ , Mn 2+ , Sn 2+ , Sr 2+ , and Zn 2+ ), chemical reagents (EDTA, SDS, and Triton X-100), and protease inhibitors (pepstatin A, iodoacetamide, PMSF, and EDTA) at 50 °C for 30 min, and then the residual protease activities were evaluated under the optimal conditions (pH 3.0, 50 °C). All the above assays were performed using casein (1%, w/v) as the protein substrate. The final concentration of the chemical reagents and metal ions was 1 mM, and that of the protease inhibitors was 0.01–5 mM. A control without added reagents was defined as 100% activity.
Substrate specificity and cleavage sites of Ta proA1
The substrate specificity of Ta proA1 was determined using different protein substrates (1%, w/v) such as casein, myoglobin, hemoglobin, bovine serum albumin, albumin HSA, skimmed milk, albumin egg, whey protein, soy protein isolate, gelatin, azo-casein, β-lactoglobulin, protamine sulfate, and collagen under the optimal conditions (pH 3.0, 50 °C). The protease activity determined using casein was defined as the control (100%). The oxidized insulin B chain (0.1%, w/v) was mixed with Ta proA1 (5 U/mL) in 50 mM citrate buffer pH 3.0 for 1 and 12 h. An equal volume of 0.1% (v/v) trifluoroacetic acid was added to stop the reaction. To determine the cleavage sites, the mixtures were analyzed by MALDI-TOF/MS (AB Sciex 4800 plus, USA).
Preparation of duck blood peptides by Ta proA1
Based on the high hydrolysis activity of Ta proA1 towards myoglobin and hemoglobin, the duck blood hemoglobin and plasma proteins were hydrolyzed by Ta proA1 (E/S: 1000 U/g, pH 3.0, 50 °C) for 3, 6, and 9 h. The hydrolysis reaction was stopped by heating in boiling water for 10 min to inactivate Ta proA1. The hydrolysate supernatants were collected to further analyze the ACE inhibitory activity and molecular weight distribution. The protein recovery rate was evaluated by the Kjeldahl and Lowry methods (Lowry 1951 ). The degree of hydrolysis (DH) was determined by the o-phthaldialdehyde (OPA) assay (Nielsen et al. 2001 ), and the DH was calculated according to the following formulas: DH (%) = h/h tot × 100 and h = (SerineNH 2 − β)/α, where h is the number of hydrolyzed bonds in the hydrolysates; h tot is the total number of peptide bonds per protein equivalent; for duck blood, α and β are 1.0 and 0.4, respectively; SerineNH 2 = meqv serine NH 2 /g protein, calculated from the OPA absorbance against a serine standard using X, the mass of sample (g), and P, the protein content (%) in the hydrolysates.
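The OPA-based DH calculation (h derived from SerineNH2 with the duck-blood constants α = 1.0 and β = 0.4, then expressed as a percentage of htot) can be sketched as follows; htot must be supplied per protein, and the value used in any call below is a placeholder, not a measured constant for duck blood:

```python
def degree_of_hydrolysis(serine_nh2: float, h_tot: float,
                         alpha: float = 1.0, beta: float = 0.4) -> float:
    """DH (%) from the OPA assay (Nielsen-style calculation).

    serine_nh2: meqv serine-NH2 per g protein, from the OPA standard curve.
    h_tot:      total peptide bonds per protein equivalent (protein-specific).
    alpha/beta: calibration constants; defaults are the duck-blood values.
    """
    h = (serine_nh2 - beta) / alpha  # hydrolyzed bonds per g protein
    return h / h_tot * 100.0
```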
A high-performance liquid chromatography (HPLC) system was utilized to analyze the molecular weight (Mw) distribution of the hydrolysates (Xie et al. 2014 ). The molecular weight distribution of the hydrolysates was divided into four fractions (< 1 kDa, 1–5 kDa, 5–10 kDa, and > 10 kDa).
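Summarizing an HPLC peptide profile into the four molecular-weight fractions used here amounts to binning peak areas by mass; a minimal sketch with hypothetical peak data (masses in kDa, areas in arbitrary units):

```python
def mw_distribution(peaks):
    """Percentage of total peak area in each molecular-weight fraction.

    peaks: iterable of (mw_kda, area) pairs from the HPLC profile.
    """
    bins = {"<1 kDa": 0.0, "1-5 kDa": 0.0, "5-10 kDa": 0.0, ">10 kDa": 0.0}
    for mw, area in peaks:
        if mw < 1:
            bins["<1 kDa"] += area
        elif mw < 5:
            bins["1-5 kDa"] += area
        elif mw < 10:
            bins["5-10 kDa"] += area
        else:
            bins[">10 kDa"] += area
    total = sum(bins.values())
    return {k: 100.0 * v / total for k, v in bins.items()}
```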
The in vitro ACE inhibitory activity was determined according to the previous method (Cushman and Cheung 1971 ). Briefly, 20 μL of hydrolysate solution and 120 μL of substrate solution (5 mM N-Hippuryl-His-Leu hydrate in 0.1 M sodium borate buffer pH 8.3 containing 0.3 M NaCl) were incubated at 37 °C for 5 min. Then, 10 μL of ACE (0.1 U/mL) was added to start the reaction at 37 °C for 60 min. Next, 150 μL of 1 M HCl was added to stop the reaction. Hippuric acid was extracted by the addition of 1 mL ethyl acetate, and the mixture was centrifuged at 4000 rpm for 10 min. The supernatant (750 μL) was collected and dried in an oven at 105 °C for 30 min. The released hippuric acid was dissolved in 500 μL of deionized water, and the absorbance was measured at 228 nm. Reaction mixtures without the hydrolysate served as the control, and mixtures without ACE served as the blank. The following formula was used to calculate the in vitro ACE inhibitory activity: ACE inhibitory activity (%) = (A b − A a )/(A b − A c ) × 100, where A a represents the absorbance of the sample, A b represents the absorbance of the control, and A c represents the absorbance of the blank. The IC 50 value (mg/mL) is defined as the concentration of hydrolysate that inhibited 50% of the ACE activity.
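The inhibition percentage follows the standard Cushman–Cheung formulation, and an IC50 can be read off by linear interpolation between the two hydrolysate concentrations bracketing 50% inhibition. A sketch under those assumptions (the interpolation convention is one common choice, not necessarily the exact fitting used in this study):

```python
def ace_inhibition(a_sample: float, a_control: float, a_blank: float) -> float:
    """ACE inhibitory activity (%) = (Ab - Aa) / (Ab - Ac) * 100."""
    return (a_control - a_sample) / (a_control - a_blank) * 100.0


def ic50(points):
    """Concentration at 50% inhibition by linear interpolation.

    points: [(concentration mg/mL, inhibition %)], sorted by concentration.
    """
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= 50.0 <= i2:
            return c1 + (50.0 - i1) / (i2 - i1) * (c2 - c1)
    raise ValueError("50% inhibition not bracketed by the data")
```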
Statistical analysis
The results were expressed as the mean ± standard deviation (SD). IBM SPSS 21.0 software (SPSS Inc., Chicago, IL, USA) was used to analyze all statistical data. One-way ANOVA indicated significant differences at P < 0.05, and Duncan’s multiple range tests were used. | Results
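The F statistic underlying the one-way ANOVA can be computed directly from the group data; a pure-Python sketch of the textbook formula (SPSS additionally applies Duncan's post-hoc test, which is not reproduced here):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    # Between-group sum of squares: group sizes times squared mean deviations.
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean.
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    return (ssb / df_between) / (ssw / df_within)
```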
Bioinformatics analysis of Ta proA1
A novel aspartic protease gene ( Tapro A1) from T. asperellum was cloned and identified (GenBank no. GFP56020.1). The full-length gene was 1643 bp with an intron of 77 bp and an open reading frame of 1566 bp encoding 521 amino acid residues (Fig. S1 ). The isoelectric point ( pI ) and molecular weight of mature Ta proA1 were predicted to be 4.08 and 36 kDa, respectively. Ta proA1 showed 16 potential O-glycosylation sites and no N-glycosylation sites. It contained a predicted signal peptide sequence (20 aa), a pro-peptide sequence, and a mature catalytic domain sequence. Two highly conserved aspartic acid residues (Asp225 and Asp411) were located at the active site to play a catalytic role (Fig. S2 ).
Multiple sequence alignments revealed that Ta proA1 shared 52.8% sequence identity with the aspartic protease PEP3 from Coccidioides posadasii C735 (GenBank no. C5PEI9.1), followed by 50.7% sequence identity with the aspartic protease PEPA from Penicillium rubens Wisconsin 54-1255 (GenBank no. B6HL60.1) (Fig. 1 ). A phylogenetic tree was constructed to identify the evolutionary relationship between Ta proA1 and other A1 family aspartic proteases (Fig. S3 ), suggesting that Ta proA1 belongs to the Aspergillopepsin I family, which contains the typical strictly conserved characteristic motif “DTGT/S”. Ta proA1 forms a distinct branch together with Q4WZS3, separate from the other A1 family proteases. These results indicated that Ta proA1 is a novel member of the A1 family of aspartic proteases.
High-level production and purification of Ta proA1
The recombinant strain with high protease activity screened by Geneticin G418 was subjected to fed-batch fermentation for the production of Ta proA1 in a 5 L fermenter. After 144 h, the protease activity, protein concentration, and cell wet weight were up to 4092 U/mL, 10.2 mg/mL, and 368 g/L, respectively (Fig. 2 A). The protein concentration of Ta proA1 gradually increased, and a protein band of approximately 36 kDa was detected by SDS-PAGE during the fermentation process (Fig. 2 B). Ta proA1 was purified to homogeneity by ion-exchange chromatography with a recovery yield of 52.8% and 1.7-fold purification (Fig. 3 and Table S2 ). The purification profile of Ta proA1 on the QSFF column is shown in Fig. S4 . The specific activity of purified Ta proA1 was 685.0 U/mg towards casein (Table S2 ).
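The recovery yield and purification fold reported in Table S2 follow from the total activity and specific activity at each purification step; a minimal sketch with illustrative numbers rather than the measured values:

```python
def specific_activity(total_u: float, total_mg: float) -> float:
    """Specific activity (U/mg) = total activity / total protein."""
    return total_u / total_mg


def purification_fold(spec_step: float, spec_crude: float) -> float:
    """Fold purification relative to the crude extract."""
    return spec_step / spec_crude


def recovery_percent(total_u_step: float, total_u_crude: float) -> float:
    """Activity recovered at a step, as % of the crude total activity."""
    return total_u_step / total_u_crude * 100.0
```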
Biochemical characterization of Ta proA1
Ta proA1 showed optimal activity at pH 3.0 (Fig. 4 A), and more than 80% of its initial activity was retained in the pH range of 3.0–6.0 (Fig. 4 B). The optimal temperature of Ta proA1 was 50 °C (Fig. 4 C). Ta proA1 displayed good stability up to 45 °C, retaining more than 80% of its initial activity (Fig. 4 D). Cu 2+ exhibited a promoting effect on protease activity, while Ba 2+ had no effect. Cr 3+ , Fe 3+ , Fe 2+ , and Sr 2+ inhibited protease activity by 11.9%, 13.3%, 17.7%, and 19.6%, respectively. Triton X-100 decreased the protease activity by 41.1%, whereas SDS completely inhibited the protease activity (Table S3 ). Pepstatin A (0.02 mM) completely inhibited its activity, indicating that Ta proA1 is an aspartic protease. EDTA and iodoacetamide had no significant effect on Ta proA1 activity, while PMSF slightly inhibited the enzyme activity (Table 1 ).
Substrate specificity and cleavage sites of Ta proA1
The substrate specificity of Ta proA1 towards different protein substrates is shown in Table 2 . Ta proA1 exhibited broad substrate specificity and excellent hydrolysis activity towards myoglobin (116.4%) and hemoglobin (113.5%), followed by bovine serum albumin (82.9%), albumin HSA (45.8%), skimmed milk (36.4%), albumin egg (25.7%), whey protein (12.9%), soy protein isolate (7.8%), gelatin (1.6%), and azo-casein (1.2%). In contrast, Ta proA1 did not hydrolyze β-lactoglobulin, protamine sulfate, or collagen. Furthermore, Ta proA1 cleaved the oxidized insulin B chain at 15 bonds (Q4-H5, C7-G8, G8-S9, S9-H10, H10-L11, L11-V12, V12-E13, E13-A14, A14-L15, L15-Y16, Y16-L17, L17-V18, R22-G23, G23-F24, and F24-F25) (Fig. 5 ).
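The cleavage map in Fig. 5 can be reproduced schematically by marking each identified bond on the oxidized insulin B chain (standard sequence FVNQHLCGSHLVEALYLVCGERGFFYTPKA); an illustrative helper, not part of the MALDI-TOF/MS analysis itself:

```python
INSULIN_B = "FVNQHLCGSHLVEALYLVCGERGFFYTPKA"  # oxidized B chain, 30 residues


def annotate_cleavages(seq: str, p1_positions) -> str:
    """Insert '|' after each 1-based P1 residue position to mark cut bonds."""
    cuts = sorted(set(p1_positions))
    out = []
    for i, aa in enumerate(seq, start=1):
        out.append(aa)
        if i in cuts:
            out.append("|")
    return "".join(out)
```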
Preparation of duck blood peptides by Ta proA1
The ACE inhibitory activity of the duck blood protein hydrolysates was analyzed. When plasma protein and hemoglobin from duck blood were hydrolyzed for 3 h, the protein recovery rates were 52.3% and 41.6%, respectively (Fig. 6 A, B). The DH values of hemoglobin and plasma protein did not differ significantly between 3 and 6 h. The DH of hemoglobin and plasma protein reached maxima of 59.5% and 54.0%, respectively, at 9 h (Fig. S5 ). As shown in Fig. S6 and Table 3 , the molecular weight distribution of the duck plasma protein hydrolysate changed from 73.2% (Mw > 10 kDa) to small peptides with Mw < 1 kDa (82.1%) at 9 h. For the duck hemoglobin hydrolysate, the molecular weight distribution changed from 75.8% (Mw > 10 kDa) to small peptides with Mw < 1 kDa (82.6%) at 6 h. The duck plasma protein hydrolysate exhibited the highest ACE inhibitory activity of 97.9% at 9 h, and the duck hemoglobin hydrolysate exhibited the highest ACE inhibitory activity of 52.9% at 6 h (Fig. 6 C, D). The IC 50 values of the duck plasma protein and hemoglobin hydrolysates were 0.091 mg/mL and 0.105 mg/mL, respectively.
In this study, a novel aspartic protease gene ( Tapro A1) from T. asperellum was successfully mined and expressed in K. phaffii . The pro-peptide sequence plays a vital role in the folding and secretion of active proteases. It is automatically removed by self-cleavage during the maturation process (Demidyuk et al. 2010 ; Peng et al. 2021 ). The band size of the expressed target protein suggested that Ta proA1 was secreted as a mature enzyme through autocatalytic activation (Fig. 2 B). The neutral metalloproteinase NPI from A. oryzae and the alkaline serine protease SPTK from Trichoderma koningii were efficiently expressed in K. phaffii , with protease activities of 43,101 U/mL and 15,900 U/mL, respectively (Ke et al. 2012 ; Shu et al. 2016 ). However, the expression levels of acid proteases in K. phaffii , such as RmproA (3480.4 U/mL) from Rhizomucor miehei CAU432 (Sun et al. 2018 ), Apa1 (1500 U/mL) from A. niger (Wei et al. 2023 ), MCAP (410 MCU/mL, rennet activity) from Mucor circinelloides (Kangwa et al. 2018 ), PsAPA (89.3 U/mL) from Penicillium sp. XT7 (Guo et al. 2021 ), TlAP (67.8 U/mL) from Talaromyces leycettanus JCM12802 (Guo et al. 2019 ), and TAASP (18.5 U/mL) from Trichoderma asperellum (Yang et al. 2013 ), are relatively low (Table S4 ). Here, the expression level of Ta proA1 (4092 U/mL) was significantly higher than those of most aspartic proteases produced in K. phaffii (Table S4 ). Therefore, the high-level expression of Ta proA1 should be beneficial for potential applications.
Ta proA1 was purified to homogeneity by QSFF chromatography with a recovery yield of 52.8% (Table S2 ). Compared with other proteases, the purification efficiency of Ta proA1 was higher than those of the aspartic proteases RmproA (16.8%) (Sun et al. 2018 ) and RmproB (18.8%) (Wang et al. 2021 ) but lower than those of the serine protease FgAPT4 (59.6%) (Wang et al. 2023 ) and the aspartic protease Apa1 (72%) (Wei et al. 2023 ). Generally, aspartic proteases have optimal pH values and pH stability under acidic conditions (Table S4 ). The optimal pH of Ta proA1 (Fig. 4 A) is consistent with that of TlAP from Talaromyces leycettanus JCM12802 (Guo et al. 2019 ), lower than those of TAASP (pH 4.0) from Trichoderma asperellum (Yang et al. 2013 ) and RmproA (pH 5.5) from R. miehei CAU432 (Sun et al. 2018 ), but higher than those of PepAb (pH 2.5) from A. niger (Song et al. 2020 ) and RmproB (pH 2.5) from R. miehei CAU432 (Wang et al. 2021 ). The optimal temperature of Ta proA1 (Fig. 4 C) was the same as those of PepA, PepAb, and PepAc from A. niger (Song et al. 2020 ) and PepA from A. oryzae (Yue et al. 2019 ) but lower than those of TlAP (55 °C) from Talaromyces leycettanus JCM12802 (Guo et al. 2019 ) and RmproA (55 °C) from R. miehei CAU432 (Sun et al. 2018 ). Ta proA1 was stable up to 45 °C (Fig. 4 D) and retained almost all its initial activity after incubation for 30 min. The thermostability of Ta proA1 (59.2% residual activity) was higher than those of PsAPA (0%) from Penicillium sp. XT7 (Guo et al. 2021 ) and rP6281 (almost 0%) from Trichoderma harzianum (Deng et al. 2018 ) after 30 min at 50 °C. Additionally, Cu 2+ effectively enhanced the activity of aspartic proteases in other studies (Deng et al. 2018 ; Guo et al. 2019 ). SDS completely inhibited protease activity, which may be attributed to the denaturation of Ta proA1 (Sun et al. 2018 ). Most aspartic proteases exhibited the highest hydrolysis activity towards casein (Azadi et al. 2022 ; Guo et al. 2021 ; Sun et al. 2018 ; Wang et al. 2021 ).
In this study, Ta proA1 showed the highest hydrolysis activity towards myoglobin, followed by hemoglobin and casein (Table 2 ). Microbial aspartic proteases preferentially cleave peptide bonds between hydrophobic or aromatic amino acid residues of protein substrates, such as Phe-Phe, Phe-Tyr, and Leu-Tyr. The hydrolysis specificity of Ta proA1 was closely related to its mode of cleavage of the substrate (Gao et al. 2018 ; Rao et al. 2011 ). Compared with other aspartic proteases, Ta proA1 showed a different substrate cleavage pattern (Fig. 5 ). A mammalian aspartic protease (porcine pepsin A) showed a broad specificity of cleavage sites at L11-V12, E13-A14, A14-L15, L15-Y16, Y16-L17, F24-F25, and F25-Y26 of the oxidized insulin B chain (Rao et al. 2011 ). The specificity of Ta proA1 was similar to that of mammalian aspartic proteases. In addition, aspartic proteases have a high affinity for the F24-F25 bond of the oxidized insulin B chain (Fig. 5 ), and pepsin-like aspartic proteases usually have higher substrate hydrolysis activity than chymosin-like aspartic proteases in degrading protein substrates into small peptides (Takyu et al. 2022 ).
Generally, the bioactivity of protein hydrolysates mainly depends on protein structure, the protease used, and the hydrolysis conditions. As shown in Fig. S7 , the activity of Ta proA1 decreased as hydrolysis time was extended during the preparation of duck blood peptides. According to the IC 50 values, duck plasma protein was more suitable than hemoglobin for efficient hydrolysis by Ta proA1 to prepare bioactive peptides with high ACE inhibitory activity (Table 3 ). Microbial proteases have been widely used for the preparation of bioactive peptides. Two aspartic proteases, RmproA and RmproB, from R. miehei CAU432 were used to produce peptides with ACE inhibitory activity from turtle meat and duck hemoglobin, respectively. When the protein concentration was 1.0 mg/mL, turtle meat hydrolyzed by RmproA showed 88% ACE inhibitory activity (Sun et al. 2018 ). When the protein concentration was 0.5 mg/mL, duck hemoglobin hydrolyzed by RmproB showed 90.7% ACE inhibitory activity (Wang et al. 2021 ). In this study, when the protein concentration of the hydrolysates was 0.1 mg/mL, duck hemoglobin and plasma protein hydrolysates showed excellent ACE inhibitory activities of 52.88% and 97.88%, respectively (Fig. 6 A, B). The duck hemoglobin hydrolysate prepared with Ta proA1 had a lower IC 50 value for ACE inhibition than the duck hemoglobin hydrolysate prepared with RmproB (Wang et al. 2021 ). This result indicated that Ta proA1 was more effective than RmproB in preparing ACE inhibitory peptides from duck hemoglobin. Currently, the commercial aspartic protease pepsin has been applied to prepare bioactive peptides from blood proteins. ACE inhibitory peptides were prepared by hydrolyzing porcine hemoglobin and bovine plasma proteins with pepsin, and the IC 50 values were 1.53 mg/mL and 17.19 mg/mL, respectively (Deng et al. 2014 ; Hyun and Shin 2000 ).
These results indicated that Ta proA1 has great potential in preparing peptides with ACE inhibitory activity from duck blood proteins.
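The IC 50 values underpinning these comparisons denote the hydrolysate concentration that inhibits 50% of ACE activity. One common way to obtain such a value from a dose–response series is log-linear interpolation between the two doses bracketing 50% inhibition; the exact fitting method used in the studies above is not stated here, so the sketch below is purely illustrative:

```python
import math

def ic50_loglinear(doses, inhibition_pct):
    """Estimate IC50 by linear interpolation of % inhibition against log10(dose).

    doses: ascending concentrations (mg/mL); inhibition_pct: matching % inhibition,
    assumed to increase with dose. Returns the dose where inhibition crosses 50%.
    """
    pairs = list(zip(doses, inhibition_pct))
    for (d0, i0), (d1, i1) in zip(pairs, pairs[1:]):
        if i0 <= 50.0 <= i1:  # bracketing interval found
            x0, x1 = math.log10(d0), math.log10(d1)
            x = x0 + (50.0 - i0) * (x1 - x0) / (i1 - i0)
            return 10.0 ** x
    raise ValueError("50% inhibition is not bracketed by the dose range")
```

With two illustrative points (30% inhibition at 0.05 mg/mL and 70% at 0.2 mg/mL), the interpolated IC 50 is 0.1 mg/mL.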
In conclusion, a novel aspartic protease ( Ta proA1) from T. asperellum was successfully expressed in K. phaffii GS115. It was efficiently produced by fed-batch fermentation in a 5 L fermenter and yielded a protease activity of 4092 U/mL. Ta proA1 showed optimal activity at pH 3.0 and 50 °C, a broad substrate specificity, and the highest hydrolysis activity towards myoglobin and hemoglobin. Moreover, duck blood proteins were efficiently hydrolyzed by Ta proA1 to prepare duck blood peptides with high ACE inhibitory activity, showing IC 50 values of 0.105 mg/mL and 0.091 mg/mL for hemoglobin and plasma protein hydrolysates, respectively. The high-level expression and unique properties of Ta proA1 make it of great value for the production of bioactive peptides.

Abstract
A novel aspartic protease gene ( TaproA1 ) from Trichoderma asperellum was successfully expressed in Komagataella phaffii ( Pichia pastoris ). Ta proA1 showed 52.8% amino acid sequence identity with the aspartic protease PEP3 from Coccidioides posadasii C735. Ta proA1 was efficiently produced in a 5 L fermenter with a protease activity of 4092 U/mL. It exhibited optimal reaction conditions at pH 3.0 and 50 °C and was stable within pH 3.0–6.0 and at temperatures up to 45 °C. The protease exhibited broad substrate specificity with high hydrolysis activity towards myoglobin and hemoglobin. Furthermore, duck blood proteins (hemoglobin and plasma protein) were hydrolyzed by Ta proA1 to prepare bioactive peptides with high ACE inhibitory activity. The IC 50 values of hemoglobin and plasma protein hydrolysates from duck blood proteins were 0.105 mg/mL and 0.091 mg/mL, respectively. Thus, the high yield and excellent biochemical characteristics of Ta proA1 presented here make it a potential candidate for the preparation of duck blood peptides.
Key points
• An aspartic protease (TaproA1) from Trichoderma asperellum was expressed in Komagataella phaffii.
• TaproA1 exhibited broad substrate specificity and the highest activity towards myoglobin and hemoglobin.
• TaproA1 has great potential for the preparation of bioactive peptides from duck blood proteins.
Supplementary Information
The online version contains supplementary material available at 10.1007/s00253-023-12848-y.
Author contribution
YBX and ZQJ conceived and designed the research. YBX and XL conducted the experiments. QJY analyzed the data. YBX and ZQJ wrote and revised the manuscript. All authors read and approved the manuscript.
Funding
This work was financially supported by the National Natural Science Foundation of China (No. 32272913) and the National Key Research and Development Program of China (No. 2021YFC2100302).
Data availability
All data generated or analyzed in this study are included in this published article and its supplementary information files.
Declarations
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Conflict of interest
The authors declare no competing interests.

License: CC BY. Citation: Appl Microbiol Biotechnol. 2024 Jan 13; 108(1):1-12.
PMC10787678 (PMID: 38217742)

Introduction
Recent studies have highlighted that severe respiratory viral infections such as influenza or coronavirus disease 2019 (COVID-19) pose a risk for secondary fungal infections in critically ill patients [ 1 ]. In consequence, influenza-associated pulmonary aspergillosis (IAPA) has been recognized as a new entity that affects immunocompromised as well as non-immunocompromised critically ill patients [ 2 ]. In line with this, a severe course of COVID-19, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), may cause respiratory failure and admission to the intensive care unit (ICU), complicated by secondary bacterial and/or fungal infections, particularly COVID-19 associated pulmonary aspergillosis (CAPA) [ 3 – 5 ]. The pathogenesis of CAPA is driven by several factors. First, SARS-CoV-2 infection leads to cellular damage in the respiratory epithelium, which is usually considered the first line of defense against fungal infections [ 6 ]. Damaged epithelium is associated with reduced ciliary clearance of inhaled fungal spores and altered direct antiviral mechanisms, such as production and release of antimicrobial peptides [ 7 ]. Suppression of the type-1 interferon immune response by the viral infection may represent a key immunological mechanism predisposing these patients to develop CAPA [ 1 , 5 ]. However, severe viral infections may also cause depletion of B1a lymphocytes affecting production of anti-Aspergillus Immunoglobulin G (IgG), thereby allowing the fungus to remain concealed from recruited lung neutrophils [ 8 , 9 ]. In addition to the direct effects caused by viral infection, adjunctive immunosuppressive/immunomodulatory therapy for moderate to severe COVID-19 may exacerbate the risk of developing CAPA [ 5 , 10 ].
The overall burden of CAPA in critically ill COVID-19 patients is challenging to assess and wide variability of CAPA prevalence has been reported [ 11 ]. In a pan-European multicenter study, inter-center CAPA prevalence rate varied between 1.7 to 26.8%, with a median prevalence of 11% [ 12 ]. This variance in incidence rates was also observed among many other studies [ 5 ] and has several potential reasons including availability of fungal diagnostics, local epidemiology, demographics and socio-economic factors. High prevalence and mortality rates [ 13 ] of CAPA raised the question whether critically ill COVID-19 patients may benefit from application of mold-active antifungal prophylaxis, similar to other high-risk groups [ 14 – 17 ]. Additionally, diagnosis of invasive aspergillosis in the ICU is often challenging, given the often unspecific clinical and radiological presentation as well as the risks associated with invasive diagnostic procedures, even though bronchoscopy is a well-tolerated procedure in ventilated COVID-19 patients [ 18 ].
In recently published observational data obtained from several ICUs at our center, it was shown that the CAPA prevalence rate may be significantly reduced by the application of antifungal prophylaxis [ 15 ]. However, antifungal prophylaxis was not associated with survival benefits in critically ill COVID-19 patients. Here we report our local experience with CAPA in a medical ICU, and the impact of systemic mold-active prophylaxis on CAPA development in critically ill COVID-19 patients. We report these data in addition to the data reported earlier [ 15 ], as we have observed that the baseline characteristics of critically ill COVID-19 patients have changed over the course of the pandemic. We now observe a higher number of immunocompromised patients in the ICU (e.g., with active malignancies) compared to the early phases of the pandemic and, associated with that, higher rates of CAPA and higher mortality rates [ 19 ]. In addition, we aimed to include data on CAPA epidemiology and diagnostic work-up in a homogeneous cohort of critically ill COVID-19 patients receiving antifungal prophylaxis, which has not been reported before.
Study Cohort
This is a prospective monocentric observational study performed at the University Hospital of Graz, Austria. The primary objective of this study was to describe our local experience with CAPA and to compare the incidence of CAPA between patients who developed COVID-19 associated acute respiratory failure (ARF) and received antifungal prophylaxis and those who did not receive prophylaxis. Secondary objectives were the comparison of demographic data and outcomes between the two groups.
In September 2020, systemic antifungal prophylaxis with posaconazole [intravenous (i.v.) or oral tablet formulation] 300 mg twice daily on day 1, followed by 300 mg once daily, was recommended at our institution for all patients with COVID-19 associated ARF for the duration of respiratory support (non-invasive or invasive). Alternatively, inhaled liposomal amphotericin B (lipAmB) at a dose of 12 mg thrice weekly could be used in patients who had a contraindication for posaconazole. The decision whether or not to implement antifungal prophylaxis, however, was solely at the treating physician’s discretion and also depended on the time the local recommendations were published.
All consecutive COVID-19 patients who developed ARF and were admitted to our medical ICU between April 2020 and May 2021, were considered eligible for study inclusion. Inclusion criteria were a positive SARS-CoV-2 polymerase chain reaction (PCR) result and admission to ICU due to COVID-19 associated ARF. Exclusion criteria were age < 18 years and other reasons than COVID-19 associated ARF for ICU admission.
All included patients were classified as having proven CAPA, probable CAPA, possible CAPA or no CAPA based on the 2020 ECMM/ISHAM consensus definitions [ 20 ].
Part of the data from the study cohort reported here has been reported in earlier publications [ 12 , 13 , 15 , 21 ]. The study was approved by the local ethics committee (32–296 ex 19/20).
Statistical Analysis
For statistical analysis, IBM SPSS 27 (SPSS Inc., Chicago, IL) was used. For the descriptive analysis, categorical variables were displayed as absolute and relative frequencies with counts and percentages. Quantitative variables were presented as medians and quartiles or as means with 95% confidence intervals (95% CI), as appropriate. To compare the group of patients who received antifungal prophylaxis with those who received no antifungal prophylaxis and test for statistical significance, categorical variables were tested with the chi-squared test or Fisher’s exact test, as appropriate. All quantitative variables were analyzed for normal distribution. Quantitative variables were then tested for statistically significant differences between the two groups with the Mann–Whitney U test or unpaired t-test, as appropriate. Survival curves of patients with and without CAPA are displayed as Kaplan–Meier curves. The impact of CAPA diagnosis on survival status of COVID-19 patients was assessed by the log-rank test. A p value < 0.05 was considered statistically significant.
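The two-group log-rank comparison described above can be made concrete with a short, self-contained routine (an illustrative re-implementation; the actual analysis was performed in SPSS):

```python
import math

def logrank(times_a, events_a, times_b, events_b):
    """Two-group log-rank test.

    times_*: follow-up times; events_*: 1 = death observed, 0 = censored.
    Returns (chi2, p), with p from the chi-square (1 df) survival function.
    """
    data = [(t, e, 0) for t, e in zip(times_a, events_a)] + \
           [(t, e, 1) for t, e in zip(times_b, events_b)]
    obs_a = exp_a = var = 0.0
    for t in sorted({t for t, e, _ in data if e == 1}):  # distinct event times
        n_a = sum(1 for tt, _, g in data if tt >= t and g == 0)  # at risk, group A
        n_b = sum(1 for tt, _, g in data if tt >= t and g == 1)  # at risk, group B
        d_a = sum(1 for tt, e, g in data if tt == t and e == 1 and g == 0)
        d_b = sum(1 for tt, e, g in data if tt == t and e == 1 and g == 1)
        n, d = n_a + n_b, d_a + d_b
        obs_a += d_a
        exp_a += d * n_a / n  # expected deaths in A under the null hypothesis
        if n > 1:
            var += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    if var == 0.0:  # no comparable events: no evidence of a difference
        return 0.0, 1.0
    chi2 = (obs_a - exp_a) ** 2 / var
    return chi2, math.erfc(math.sqrt(chi2 / 2.0))  # chi-square (1 df) upper tail
```

With identical groups the statistic is 0 and p = 1; as the survival experience of the two groups diverges, the statistic grows and p shrinks.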
Study Cohort
Baseline characteristics of the study cohort are displayed in Table 1 . Seventy-seven patients were admitted to the ICU during the observational period, fulfilled inclusion criteria and were enrolled into the study. Twenty-seven patients (35.1%) were female and 50 (64.9%) had a cardiovascular disease as a risk factor for a severe COVID-19 course. The second most prevalent underlying condition was chronic lung disease in 26 patients (33.8%), followed by obesity defined as body mass index > 30 (27.3%), diabetes mellitus (23.4%), malignancies (15.6%), and history of smoking (14.3%). Six patients (7.8%) were recipients of a solid organ transplant. Thirty patients (39.0%) received non-invasive ventilation only in the ICU, and 47 patients (61.0%) required invasive mechanical ventilation. Out of the 47 patients who received invasive mechanical ventilation, seven received additional extracorporeal membrane oxygenation (ECMO) treatment.
Antifungal Prophylaxis, CAPA Development and Outcome
Fifty-three patients (68.8%) received antifungal prophylaxis during their ICU stay as part of their routine management. Posaconazole was used as prophylaxis in all 53 patients, and all patients received posaconazole intravenously. As none of the patients had contraindications against the routine use of posaconazole, inhaled lipAmB was not used in this cohort during the observational period.
In the total study cohort, six patients were routinely diagnosed with CAPA. All CAPA cases were classified as probable CAPA. CAPA was only diagnosed in the non-prophylaxis group with six cases versus no case in the prophylaxis group (p < 0.001). CAPA was diagnosed at a median of 8.5 days (25–75th: 3–18.25) following ICU admission. The incidence of CAPA in the overall cohort was 0.57 events per 100 ICU days (95% CI 0.53–0.62) and 2.2 events per 100 ICU days (95% CI 2.02–2.37) in the non-prophylaxis group.
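An incidence rate per 100 ICU days is the event count divided by the summed ICU days of the cohort, scaled by 100. As an illustration (the day totals below are hypothetical, and since the paper does not state its CI method, a normal approximation on the Poisson event count is assumed here):

```python
import math

def incidence_per_100_days(events: int, total_icu_days: float):
    """Incidence rate per 100 ICU days with a normal-approximation 95% CI,
    treating the event count as Poisson-distributed."""
    rate = 100.0 * events / total_icu_days
    se = 100.0 * math.sqrt(events) / total_icu_days  # SE of the scaled count
    return rate, (rate - 1.96 * se, rate + 1.96 * se)

# Hypothetical input: 6 CAPA cases over 1,200 summed ICU days.
rate, (lo, hi) = incidence_per_100_days(6, 1200.0)  # rate = 0.5 per 100 ICU days
```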
The median length of stay in the ICU was 10 days (25–75th: 5–20) in the non-CAPA group versus 36.5 days (25–75th: 33–42.25) in the group of patients diagnosed with CAPA (p < 0.001). Median observation time from ICU admission to last follow-up visit for the whole study cohort was 30 days (25–75th: 15–39.5). For patients who were diagnosed with CAPA, the median follow-up time after CAPA diagnosis was 32 days (25–75th: 24.75–49.5). Thirty days after ICU admission, 30 patients (56.5%) in the prophylaxis group were still alive, while 19 patients (79.2%) in the non-prophylaxis group were still alive. At ICU discharge, 28 patients (52.8%) in the prophylaxis group were still alive, while 15 patients (62.5%) in the non-prophylaxis group were still alive. Out of the six CAPA patients, four were discharged alive from ICU (66.6%). In the group of non-CAPA patients, 39 out of 71 patients (54.9%) were discharged alive ( p = 0.689).
No difference in the cumulative 84-day survival for individuals who received antifungal prophylaxis versus those who did not could be observed (Fig. 1 ; p = 0.115). Median survival time was estimated with 42 days (95% CI 30.24–53.76) in the CAPA group and 42 days (95% CI 33.29–50.71) in the group of patients without developing CAPA ( p > 0.05).
Administration of systemic glucocorticosteroids did not differ significantly between both groups. Forty-six patients (86.8%) in the prophylaxis group received glucocorticosteroids versus 17 patients (70.8%) in the control group. In contrast, tocilizumab was significantly more often applied in the non-prophylaxis group with four patients versus one patient in the prophylaxis group ( p = 0.031).
CAPA Diagnosis
Bronchoscopy including bronchoalveolar lavage fluid galactomannan (BALF-GM) testing was performed in 11 of the 24 patients (45%) who did not receive antifungal prophylaxis and in 19 of the 53 patients (36%) who received antifungal prophylaxis ( p > 0.05). In addition, baseline characteristics (e.g., immunosuppressive disease) were similar between the CAPA and non-CAPA groups. All six CAPA cases had BALF-GM testing. In the group of patients without CAPA, BALF-GM was performed in 24 of 71 patients (34%).
In four out of the six CAPA patients, there was a positive BALF-GM result with an optical density index (ODI) > 1. Two patients had a positive GM result in serum with an ODI > 0.5. Four out of six patients had a positive Aspergillus spp. specific polymerase chain reaction (PCR) result in BALF. There was one positive Aspergillus spp. lateral flow device (LFD) result in BALF in the total cohort (this patient also had a positive BALF-GM and BALF-Aspergillus PCR), one positive Aspergillus spp. culture result in BALF and one positive Aspergillus spp. culture result in tracheal aspirate. | Discussion
In this prospective monocentric cohort study in critically ill COVID-19 patients, we observed a CAPA incidence of 2.2 events per 100 ICU days in those not receiving antifungal prophylaxis, while not a single case of CAPA was observed in those with administration of mold active antifungal prophylaxis. No impact on cumulative 84 days survival, however, was observed.
Antifungal prophylaxis has been shown to significantly reduce the incidence of invasive fungal disease (IFD) in different cohorts of patients with hematological malignancies who are considered to be at high risk for IFD development [ 22 – 25 ]. Prevalence rates of IFDs of up to 25% have been reported in neutropenic patients in the pre-prophylaxis era [ 26 ]. Similar rates have been observed in other cohorts of critically ill influenza and COVID-19 patients [ 2 , 10 , 12 , 27 ]. To evaluate the impact of antifungal prophylaxis on IAPA, the randomized POSA-FLU trial was conducted. This study investigated the safety and efficacy of posaconazole prophylaxis in critically ill influenza patients [ 28 ]. The trial showed a trend towards reduced incidence of IAPA in the posaconazole arm versus the placebo arm (5.4% versus 11.1%) but failed to prove statistical significance, because of a lack of power and because the rate of early IAPA after ICU admission (within 48 h) was higher than expected. In addition, prophylaxis was limited to a maximum of seven days; however, two cases of IAPA were diagnosed after day 7 (day 8 and 12, respectively). In contrast to this finding, we observed that the time from ICU admission to CAPA diagnosis was longer in the cohort reported here (median 8.5 days). This is in concordance with reports from other studies [ 5 ]. The longer period may allow antifungal prophylaxis to reduce the rate of fungal infections more significantly compared to influenza, as mean C min levels of > 1000 ng/mL are achieved approximately 3 days after start of intravenous posaconazole [ 29 ]. Even though it may take several more days to reach a solid steady state (approximately 7 days), this would be sufficient to achieve solid trough levels before the majority of CAPA cases are clinically diagnosed. Also, in patients who receive ECMO, posaconazole plasma concentrations of ≥ 1 mg/L will be reached in the majority of patients within 48 h [ 30 ].
Concordant with this, we observed a significantly reduced CAPA incidence rate in the prophylaxis group. In line with another larger study from our center, evaluating a shorter timeframe but involving multiple ICUs across our hospital [ 15 ], we could not observe a significant reduction in mortality in the group of patients who received antifungal prophylaxis. This observation may be, at least partly, explained by the fact that our study was neither designed nor aimed to detect a difference in mortality. In addition, all patients who needed ECMO treatment in our cohort received antifungal prophylaxis, whereas no patient in the non-prophylaxis group was treated with ECMO. Taken together with the finding that 30 days after ICU admission more patients in the non-prophylaxis group were alive compared to the prophylaxis group, one may hypothesize that patients in the prophylaxis group had more severe disease and a poorer prognosis, regardless of the application of antifungal prophylaxis. In addition, the fact that there was no survival benefit for the prophylaxis group may, at least partly, be explained by the pathophysiological hallmarks of CAPA. In contrast to invasive aspergillosis in severely neutropenic patients, and also in contrast to what is observed in critically ill influenza patients, CAPA does not primarily cause angio-invasive infection [ 31 ]. Angio-invasion in CAPA is usually observed at later stages of the disease. In general, we observed that CAPA diagnosis is made at a median of 8.5 days after ICU admission. This implies that patients who die earlier from COVID-19 in ICUs cannot develop CAPA, potentially causing a bias towards a higher mortality rate in the non-CAPA cohort. As all CAPA cases occurred in the non-prophylaxis group, this may partly explain the fact that we could not observe a significant survival benefit between the two groups.
Thereby Kaplan–Meier analyses starting at the day of ICU admission are biased by the fact that the control arm includes all those who are more severely ill and have a fatal outcome before they can develop CAPA. In contrast, multiple large studies have shown that mycological evidence for CAPA per se, like positive BAL GM or BAL culture and especially positive serum GM are associated with significantly higher mortality rates often exceeding 80% [ 21 , 31 , 32 ].
Different reasons may explain the pathophysiological differences between IAPA and CAPA, including the lack of neutropenia in many patients with COVID-19 in ICUs as well as the different pathophysiology of COVID-19 compared to influenza. Infection with influenza virus, for example, does cause severe lytic infections in the respiratory epithelium and therefore facilitates early invasive growth of Aspergillus [ 6 ]. In addition, influenza has been shown to affect some defense mechanisms against pulmonary infections like the NADPH-dependent production of reactive oxygen species in macrophages and neutrophils [ 33 ]. However, application of immunomodulatory drugs including glucocorticosteroids or anti-IL-6 treatment is not standard of care for severe influenza but is for severe COVID-19, which may contribute to the elevated risk of pulmonary aspergillosis in critically ill COVID-19 patients. In our cohort, there was only a relatively small number ( n = 5) of subjects who received tocilizumab; however, the majority of these patients ( n = 4) were in the non-prophylaxis group. As tocilizumab treatment is considered an independent risk factor for the development of CAPA [ 12 ], this may have partly contributed to the higher CAPA incidence in the non-prophylaxis group in this study. In general, we do not exactly know the reasons why antifungal prophylaxis was withheld in some patients. One may only speculate that, especially in the early phase of the pandemic, the burden of CAPA and risk factors for CAPA development had not been clearly identified. For some physicians, the risk–benefit ratio may therefore have been difficult to establish, considering the potential side effects of antifungals in the ICU and the unclear benefit, especially in terms of overall outcome. Some physicians therefore may have favored a pre-emptive strategy, even though we now know that screening for CAPA is difficult based on the limited sensitivity of blood biomarkers and the need for invasive procedures like bronchoscopies.
In this study, BALF-GM was the main mycological diagnostic criterion; however, serum GM was also positive in two of the six CAPA patients, indicating angio-invasive disease. Next-generation sequencing of plasma samples may overcome some of the limitations of conventional blood GM testing and showed promising results for CAPA diagnosis in a subgroup of the CAPA patients reported here [ 34 ].
The influence of different SARS-CoV-2 strains on the epidemiology of CAPA and consequently on the management strategies, including antifungal prophylaxis, is not fully understood yet. CAPA incidence rates may also differ with the predominant SARS-CoV-2 strains [ 35 ], as observed for influenza [ 36 ]. In this study, we covered several COVID-19 waves and observed CAPA cases in all of them, including one CAPA case that was diagnosed in the period (before September 2020) where there was no local recommendation for antifungal prophylaxis in critically ill COVID-19 patients. Whether future variants of SARS-CoV-2 or adaptions in the COVID-19 management will affect the epidemiology of CAPA needs to be closely observed, as this may also affect the strategies for CAPA management and prevention.
Based on currently available data, no recommendation can be given for or against the general use of antifungal prophylaxis in critically ill COVID-19 patients. This is also influenced by several factors like the wide variation in local epidemiology, the use of (combination) immunomodulatory treatment, individual risk factors like underlying immunosuppressive disease, potential transient risk factors like construction work, and drawbacks of azole usage in the ICU like drug-drug interactions or toxicities. Besides antifungal prophylaxis, however, fungal awareness is key for early diagnosis and treatment. This is critical, as it is well known that true fungal infections in COVID-19 patients are associated with a reduced probability of survival [ 12 , 37 ].
This report highlighted the local experience with application of antifungal prophylaxis in critically ill COVID-19 patients. As this was a non-interventional observational trial, it does come with some important limitations that should be considered: First, the uncontrolled study design does not allow for equal distribution of risk factors for CAPA development or poor outcome among the two groups. As some variables including APACHE-II score, SOFA score or details on EORTC/MSGERC risk factors for fungal infections were not available, it cannot be excluded that baseline characteristics were distributed unequally between the two groups. Second, as no systematic antifungal screening protocol was implemented at our center, the decision whether or not to perform antifungal diagnostics was solely based on the treating physician’s discretion. This may cause over- or underdiagnosis of CAPA in one of the groups. Third, we did not observe a biopsy-proven CAPA case, which is due to the fact that lung biopsy in critically ill COVID-19 patients is usually not possible. Lastly, the study was not designed to investigate a survival benefit of antifungal prophylaxis in this cohort. In addition, a Cox regression model could not be performed to implement CAPA as a time-dependent variable and avoid immortal time bias, owing to the small number of CAPA cases and violation of the proportional hazards assumption.
In conclusion, in this observational cohort we found that the application of mold-active antifungal prophylaxis in critically ill COVID-19 patients was associated with a significantly reduced number of CAPA cases, while we could not observe an effect on overall survival.

Handling Editor: Jannik Stemler.
Early after the beginning of the coronavirus disease 2019 (COVID-19) pandemic, it was observed that critically ill patients in the intensive care unit (ICU) were susceptible to developing secondary fungal infections, particularly COVID-19 associated pulmonary aspergillosis (CAPA). Here we report our local experience on the impact of mold-active antifungal prophylaxis on CAPA occurrence in critically ill COVID-19 patients. This is a monocentric, prospective cohort study including all consecutive patients with COVID-19 associated acute respiratory failure who were admitted to our local medical ICU. At the treating physician’s discretion, patients either received antifungal prophylaxis or did not. All patients were retrospectively classified as having CAPA or not according to the 2020 ECMM/ISHAM consensus definitions. Seventy-seven patients were admitted to our medical ICU between April 2020 and May 2021 and included in the study. The majority of patients received invasive mechanical ventilation (61%). Fifty-three patients (68.8%) received posaconazole prophylaxis. Six cases of probable CAPA were diagnosed within clinical routine management. All six cases were diagnosed in the non-prophylaxis group. The incidence of CAPA in the overall study cohort was 0.57 events per 100 ICU days and 2.20 events per 100 ICU days in the non-prophylaxis group. No difference in cumulative 84-day survival could be observed between the two groups ( p = 0.115). In this monocentric cohort, application of posaconazole prophylaxis in patients with COVID-19 associated respiratory failure did significantly reduce the rate of CAPA.
Supplementary Information
The online version contains supplementary material available at 10.1007/s11046-023-00809-y.
Supplementary Information
Below is the link to the electronic supplementary material.

Author Contributions
JP and MH conceptualized the study. JF, ACR, MH, and JP contributed to data collection and analysis. MG contributed to methodology. JF and JP drafted the manuscript. MG, MH, ACR and PE critically revised the manuscript. All authors reviewed and approved the final version of the manuscript.
Funding
Open access funding provided by Medical University of Graz.
Declarations
Conflict of interest
M.H. received research funding from Gilead, Astellas, MSD, Euroimmune, Scynexis, F2G and Pfizer, outside of the submitted work. J.P. has received speakers’ fees from Gilead Sciences, Pfizer, Swedish Orphan Biovitrum, Associated of Cape Cod, served at advisor boards for Gilead Sciences and Pfizer and holds stocks of Novo Nordisk and AbbVie Inc–all outside of the submitted work.
Ethical Approval
The study was approved by the local ethics committee (32–296 ex 19/20).
Consent to Participate
According to our local ethics committee regulations, informed consent form was obtained from all patients, whenever possible.
Consent to Publications
Not applicable.

License: CC BY. Citation: Mycopathologia. 2024 Jan 13; 189(1):3.
PMC10787679 (PMID: 37847456)

Introduction
While considered an established marker of cell proliferation for decades, the potential role for Ki67 immunohistochemistry (IHC) in breast cancer management has remained unclear due largely to its high inter-observer variability and the lack of established cutoff points for clinical decisions [ 1 ]. Nonetheless, Ki67 has been used in several clinical trials (e.g., POETIC) [ 2 ]. In 2021, it was FDA approved as a companion diagnostic for selection of high-risk patients for treatment with CDK4/6 inhibitors, although inter-observer reproducibility remained a concern [ 3 ] and in 2023, FDA removed the Ki-67 testing requirement [ 4 ].
In 2021, the Ki67 International Working Group (IKWG) published updated recommendations for standardizing the visual assessment of Ki67 IHC in breast tissue [ 5 ]. In addition to scoring methods, recommendations included that breast cancer samples for Ki67 testing be processed in line with American Society of Clinical Oncology and the College of American Pathologists (ASCO/CAP) guidelines for HER2 and hormone receptors, and that they ideally be tested on core needle biopsies since this minimizes fixation problems that can impact analytical validity. When following their analytic and scoring guidelines, the IKWG concluded that Ki67 IHC cut points of ≤ 5% and ≥ 30% have sufficient clinical utility for patients with ER+HER2− stage I/II breast cancer and can be used to identify patients who can avoid or proceed with chemotherapy.
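The two IKWG cut points amount to a simple three-way decision rule, which can be written down directly. This is an illustrative sketch only: applying it presumes the IKWG analytic and scoring conditions described above are met, and Ki67 would never be the sole input to a treatment decision.

```python
def ikwg_ki67_triage(ki67_pct: float) -> str:
    """Map a visually scored Ki67 IHC value (%) to the IKWG action categories
    for ER+HER2- stage I/II breast cancer. Illustrative only."""
    if not 0.0 <= ki67_pct <= 100.0:
        raise ValueError("Ki67 must be a percentage in [0, 100]")
    if ki67_pct <= 5.0:
        return "low (<=5%): chemotherapy may be avoided"
    if ki67_pct >= 30.0:
        return "high (>=30%): chemotherapy may proceed"
    return "intermediate: Ki67 alone insufficient (consider a multigene assay)"
```

Values between the two cut points fall into the indeterminate zone, which is precisely where surrogacy for the 21-gene assay is in question.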
Multiple studies [ 6 – 11 ] have examined the extent to which Ki67 IHC correlates with the 21-gene breast cancer recurrence score (RS) assay, which is included in NCCN and ASCO guidelines for selecting HR+HER2− breast cancer patients for chemotherapy [ 12 , 13 ]. There also has been significant interest in whether Ki67 IHC alone or together with other IHC markers or clinical factors could be used as a cost-effective surrogate for the 21-gene assay or to identify a subset of patients who could avoid this or other multigene tests [ 14 , 15 ]. However, these studies have used various scoring methods and cut points for Ki67 IHC, have typically used surgical specimens for both Ki67 and the 21-gene assay, and have often not been restricted to the patient population targeted by the recent IKWG recommendation (i.e., patients with ER+HER2− stage I/II breast cancer).
The goal of our study was to follow the recent IKWG visual assessment guidelines and examine Ki67 IHC scoring reproducibility in a real-world setting. We also evaluated whether the Ki67 IHC cut points (≤ 5%, ≥ 30%) could accurately identify patients with either low 21-gene RS (< 26) or high RS (≥ 26) among a clinically low-risk group of early-stage breast cancer patients, whom we defined as women aged 50+ years diagnosed with ER+PR+HER2−, node-negative disease. In addition, we examined Ki67 scoring results obtained by image analysis (IA) and Ki67 cut points used in other studies and patient populations.

Study design and methods
Setting and source population
The study was conducted within Kaiser Permanente Northern California (KPNC), an integrated health care system providing comprehensive primary and specialty care to approximately 4.4 million members at 21 hospitals. Approximately 3500 enrollees are diagnosed with a new invasive breast cancer each year. Following ASCO/CAP guidelines, all diagnostic biopsies are processed and undergo IHC staining for ER, PR, and HER2 at local pathology departments, and slides are then sent to a central IHC laboratory for scoring. Ki67 testing is not routinely performed for breast cancer at KPNC because it has not been used in treatment decisions. The KPNC pathology departments and laboratories are certified under the Clinical Laboratory Improvement Amendments (CLIA) of 1988. During the study period, all breast IHC scoring was done by one of three pathologists specializing in semiquantitative IHC. Patients with ER+HER2− disease may have surgical specimens sent to Exact Sciences for testing by the 21-gene recurrence score assay to guide chemotherapy treatment decisions, as recommended by NCCN [ 13 ]. From January 2018 through December 2020, approximately 3000 patients aged 50+ years had testing by the 21-gene recurrence score assay. Other multigene tests for chemotherapy decisions (e.g., MammaPrint) are rarely ordered within KPNC. The study was approved by the KPNC IRB; patient consent was waived.
Study patients
We included a simple random sample of women aged 50+ years diagnosed from 2018 to 2020 with lymph node-negative, ER+PR+HER2− invasive breast cancer whose tumors (from surgery) had undergone testing by the 21-gene recurrence score assay. The sample size was based on resource constraints and on the number hypothesized to be needed to address the study aims ( n ~ 300). Of the 320 patients initially selected at random, tissue was unavailable for 13, leaving a final study population of 307 patients.
Ki67 staining, training, and scoring
We followed the recommendations of the IKWG for studies of Ki67 with respect to the type (core biopsy) and age (< 5 years) of tissue specimen and the visual scoring methods [ 5 ]. Archived blocks with core biopsy specimens that had been used for the original IHC testing for ER, PR, and HER2 were identified and retrieved. The blocks were sent to NeoGenomics Laboratories, Inc. for Ki67 IHC staining (Dako clone MIB1); slides were scanned at 20× on a Leica Biosystems Aperio AT2 scanner, and scoring was performed with an image analysis (IA) algorithm validated at NeoGenomics Laboratories for clinical use, applying the hot spot counting method to 500–1000 cells [ 16 ]. Image analysis used a 40× digital probe set in at least three hot spot areas, with Ki67 reported as the percentage of positive cells and staining intensity, and was reviewed by two pathologists (authors TKL and WG). This laboratory is CLIA certified to perform high-complexity clinical laboratory testing. The study also included two KPNC immunopathologists (authors RJB and WJ) who each scored 1400–3000 breast cases per year from 2018 to 2020. Both immunopathologists underwent the web-based IKWG calibration training (publicly accessible at http://www.gpec.ubc.ca/calibrator ) and independently visually scored all slides using the global counting method (400 cells), blinded to each other and to the IA readings. Weighted and unweighted Ki67 scores were generated. The detailed scoring protocol is provided in the Supplementary Document: Instructions for Ki-67 Reproducibility Study Phase 3: Core Biopsies.
Analytic methods
We compared Ki67 scores and examined inter-rater reliability across pathologists using intraclass correlation (ICC) and Kappa statistics. We also determined the percent of patients with low Ki67 scores (≤ 5%) by each pathologist and by IA who also had low RS (< 26) and the percent with Ki67 scores ≥ 30% who also had high RS (≥ 26). In secondary analyses, we examined other Ki67 cut points and subgroups of patients based on a combination of PR scores and/or tumor grade.

Results
Selected characteristics of the study population are provided in Table 1 . In this low-risk population of breast cancer patients, there were 40 patients (or 13%) with a high (≥ 26) RS. The percent of women with high RS was similar across racial/ethnic groups, but a higher percent of younger women, and women with larger and higher-grade tumors had RS ≥ 26. In addition, a substantially higher percent of women with low ER or PR (i.e., 1–9% staining) had RS ≥ 26.
A comparison of weighted Ki67 scores, using the global counting method, for the two pathologists is presented in Fig. 1 . The ICC for Ki67 scores (log-transformed) by the two pathologists was 0.82 (95% CI 0.78–0.85); using cut points of ≤ 5, 6–29, 30+, the Kappa was 0.67 (95% CI 0.56–0.78). Using the IKWG guidelines, the scoring by pathologists took an average of 9–13 min per case. The ICC for IA vs pathologist 1 was 0.79 (95% CI 0.74–0.83); it was 0.76 (95% CI 0.71–0.80) for IA vs pathologist 2.
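For readers who want to reproduce agreement statistics of this kind, the sketch below computes a two-way random-effects, absolute-agreement intraclass correlation (ICC(2,1)) and an unweighted Cohen's kappa on scores categorized at the ≤ 5 / 6–29 / ≥ 30 boundaries. The synthetic scores, the specific ICC variant, and the helper functions are illustrative assumptions; this is not the study's analysis code, and the study may have computed its statistics differently (e.g., on log-transformed scores).

```python
import numpy as np

def icc_2_1(y):
    """Two-way random-effects, absolute-agreement ICC for an
    (n subjects x k raters) matrix of scores (Shrout-Fleiss ICC(2,1))."""
    y = np.asarray(y, dtype=float)
    n, k = y.shape
    grand = y.mean()
    msr = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    msc = n * ((y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    sse = ((y - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa for two categorical ratings."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                         # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c)   # chance agreement
             for c in cats)
    return (po - pe) / (1 - pe)

# Synthetic Ki67 scores (%) from two raters on 8 cases (not study data)
r1 = np.array([3, 8, 12, 25, 35, 4, 18, 40])
r2 = np.array([4, 10, 15, 22, 38, 6, 20, 33])
icc = icc_2_1(np.column_stack([r1, r2]))

# Categorize with the IKWG-style cut points (<= 5, 6-29, >= 30)
cat1 = np.digitize(r1, [5.5, 29.5])
cat2 = np.digitize(r2, [5.5, 29.5])
kappa = cohens_kappa(cat1, cat2)
```

Changing the boundaries passed to `np.digitize` swaps in any other categorization of interest.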
A comparison of reader scores (pathologist 1, pathologist 2, and image analysis) for Ki67 and the 21-gene recurrence score is presented in Table 2 . Depending on the reader, 8.8–16.0% of our cohort had Ki67 ≤ 5% and 11.4–22.5% had scores ≥ 30%. Among patients with Ki67 scores ≤ 5% by pathologist 1 ( n = 49, 16.0%), pathologist 2 ( n = 27, 8.8%), or IA ( n = 33, 10.7%), the percentages with RS < 26 were 91.8%, 92.6%, and 90.9%, respectively. Among patients with Ki67 scores ≥ 30% by pathologist 1 ( n = 41, 13.4%), pathologist 2 ( n = 35, 11.4%), or IA ( n = 69, 22.5%), the percent with RS ≥ 26 was 41.5% for pathologist 1, 51.4% for pathologist 2, and 27.5% for IA.
Secondary analyses
Since other studies have used different Ki67 cut points, we also present results for cut points at 10% and 20%. Among patients with Ki67 scores of < 10% by pathologist 1 ( n = 120), pathologist 2 ( n = 111), or IA ( n = 81), the percent with a RS of < 26 was 95.8% for pathologist 1, 94.6% for pathologist 2, and 95.1% for IA. Among patients with Ki67 scores of < 20% by pathologist 1 ( n = 217), pathologist 2 ( n = 218), or IA ( n = 176), the percent with a RS of < 26 was 93.1% for pathologist 1, 92.7% for pathologist 2, and 92.1% for IA.
When analyses were restricted to patient subgroups based on tumor characteristics, we found that the percent of patients with Ki67 < 10% who also had RS < 26 was slightly higher among patients with PR > 10%: it was 98.2% for pathologist 1, 97.0% for pathologist 2, and 96.0% for IA. Among patients with low-grade tumors < 2 cm, the percent of patients with Ki67 < 10% who also had RS < 26 was 96.9% for pathologist 1, 94.9% for pathologist 2, and 97.7% for IA. When we used tumor grade to identify patients with low or high RS, we found that 97.2% of low-grade tumors had RS < 26. This increased to 99.1% when we restricted patients to PR > 10% staining.

Discussion
In our study of early-stage breast cancer patients with favorable prognosis—women aged 50 years or older, with node-negative disease and ER positive, PR positive, HER2− tumors—we found that visual assessment of Ki67 IHC after undergoing IKWG training was moderately to strongly reproducible across two IHC pathologists (ICC = 0.82), with 8.8–16.0% having Ki67 scores ≤ 5% and 11.4–13.4% having scores ≥ 30%. In addition, we found that > 90% of patients with Ki67 scores ≤ 5% also had RS < 26. However, a large percent (roughly 50–70%, depending on the reader) of those with Ki67 scores ≥ 30% also had RS < 26. Ki67 scoring concordance with RS was fairly similar across pathologists and with scoring by IA.
When following IKWG visual scoring guidelines, the recommended protocol was time consuming and challenging for our IHC pathologists, who practice in a large integrated health care system with a centralized breast IHC service and high patient volume. Using the global counting method, the inter-rater reproducibility (ICC = 0.82) of our pathologists was slightly lower than that reported by the IKWG (ICC = 0.87) [ 5 ]. It is possible that with more practice, visual Ki67 reading might become more reproducible and faster, but the necessary time per slide is likely too long in a high-volume setting, or even for lower volume settings. Automated scoring using digital image analysis may be able to address this issue. The IKWG found high reproducibility of IA scoring for Ki67 IHC across laboratories using the same scanner (ICC = 0.89), although reproducibility was lower across sites using 10 different software platforms and 7 different scanners (ICC = 0.83) [ 5 , 17 ]. When comparing scores from IA with visual scores, they observed slightly better concordance using the global (average of fields) vs. hot spot (maximum field) scoring method. In our study, scoring by IA used only the hot spot method while visual scoring by our immunopathologists used only the global method, as recommended by the IKWG. Thus, we were unable to examine whether the concordance of our immunopathologists vs. IA varied by global vs. hot spot method.
In their updated recommendations, the IKWG concluded that visual scoring of Ki67 IHC could be used for ER+HER2− stage I/II patients using ≤ 5% and ≥ 30% as clinical cut points, such that results below and above these thresholds could be used to withhold or proceed with chemotherapy, respectively, without the need for more expensive multi-gene assays, such as the 21-gene RS [ 5 ]. However, they indicate this requires using a highly analytically validated assay and scoring system. Recent ASCO guidelines also suggest that these Ki67 cut points may be used in this clinical setting when patients do not have access to multi-gene assays [ 12 ]. Several clinical trials have used other cut points, but for different indicated uses. For example, POETIC used Ki67 < 10% in a study of neoadjuvant endocrine therapy [ 2 ], and monarchE used Ki67 < 20% in a study of CDK4/6 inhibitors [ 18 ]. However, concerns regarding lack of standardized scoring and inter-rater and inter-laboratory reproducibility apply in these settings as well [ 3 ].
There are examples of testing pathways for multi-gene assays in early-stage breast cancer. In the UK, the National Institute for Health and Care Excellence (NICE) recommends a two-stage strategy for testing ER+HER2−, node-negative breast cancer patients with a multi-gene assay [ 14 ]. First, a validated tool such as PREDICT or the Nottingham Prognostic Index is used to classify patients into low, intermediate, or high risk. Those with low-risk disease are unlikely to benefit from adjuvant chemotherapy. For those classified as intermediate risk, the multigene assays RS, EndoPredict, or Prosigna are recommended as options for guiding adjuvant chemotherapy decisions.
Our goal was to develop a simple testing pathway for RS using only Ki67 results but among a slightly more targeted population of early breast cancers with good prognosis (i.e., PR+ and age 50+ years as well as ER+HER2−). While in our study over 90% of patients with Ki67 ≤ 5% had low RS, this was a very small subset (< 20%) of all the patients with low RS. Further, a substantial proportion of patients with Ki67 ≥ 30% also had low RS. Thus, using the recommended IKWG cut points for Ki67, even in a very targeted population, appears unlikely to be a very clinically useful testing pathway for multigene assays. Interestingly, in our study, concordance with low RS was slightly higher when we used the cut point of Ki67 < 10% or Ki67 < 20%, and both of these cut points identified a substantially larger subset of patients with low RS than the ≤ 5% cut point. Thus, finding the most appropriate Ki67 cut points for a clinical decision may need further study and may continue to differ for different patient populations or treatment decisions.
We found that further restricting the study population to patients with PR > 10% staining marginally improved the correlation between Ki67 ≤ 5% and low RS. These findings are consistent with other studies that have used PR and Ki67 together to identify a subset of patients with good prognosis [ 19 , 20 ]. When examining concordance of other tumor factors with RS, low tumor grade was the most concordant with low RS. This is also consistent with other studies (e.g., Paik [ 15 ]).
While we are unaware of other studies that have compared Ki67 IHC results following the IKWG guidelines (i.e., slides from core biopsy, training of pathologists for use of standardized scoring methods) with RS results from testing conducted by Genomic Health (now Exact Sciences), a number of other studies have compared Ki67 IHC results with RS results [ 6 – 11 , 15 , 21 ]. As in our study, others have observed discordance between Ki67 IHC and RS results based on various cut points. Some also have found that concordance improves when restricting patients to those with higher PR scores (e.g., Gluz [ 21 ]) or low-grade tumors (e.g., Paik [ 15 ]). In addition to discordance between Ki67 IHC and the RS, substantial discordance has been reported for comparisons across multigene tests [ 15 ], which has been accepted by the clinical community. Until there are studies of Ki67 IHC using IKWG guidelines that examine prognosis or response to chemotherapy as endpoints, concordance with RS or other multi-gene assays may be a reasonable surrogate. However, it is unclear what amount of discordance between Ki67 IHC and these multigene assays would be acceptable to the clinical community.
Although we followed IKWG recommendations to use biopsy specimens for Ki67 IHC, which is also routine clinical practice for other IHC markers, this may have in part contributed to some of our observed discordance with RS, since RS was done on surgical specimens, as is typical in clinical practice. Studies have shown that intratumoral heterogeneity is common in breast cancer [ 22 – 24 ]. A recent study found Ki67 heterogeneity in 18% of sampled breast cancers [ 25 ]. While we considered RS to be the gold standard in our study, our findings suggest that Ki67 concordance with RS may vary by clinical factors such as tumor grade. Interestingly, it appears that combining clinical factors (age, tumor size and grade) with RS provides better prognostic information for treatment decisions than RS alone [ 26 ]. This would likely be true for Ki67 IHC, as well.
Our study has several strengths and weaknesses. To our knowledge, our study is one of the first to compare Ki67 scoring following IKWG guidelines with results from the 21-gene Recurrence Score assay, a currently NCCN-recommended diagnostic for selecting patients for chemotherapy. While our health care and high-volume pathology system may not be generalizable to all settings, our pathology departments follow ASCO/CAP guidelines for processing specimens and our clinicians follow NCCN and other professional guidelines for patient management. We used the biopsy specimen for Ki67, as is recommended by the IKWG, and had pathologists undergo IKWG web-based training and follow its scoring guidelines. We also examined multiple Ki67 cut points for their concordance with low RS (< 26) and we explored whether concordance might be improved in a more restricted patient population. Our eligible study population was restricted to patients who had testing by the 21-gene assay and did not include all women with ER+PR+HER2−, node-negative disease diagnosed at 50+ years. If those tested differed from those untested by factors we did not assess (i.e., factors other than age, ER, PR, HER2, nodal status, grade) and these factors are also related to the concordance of Ki67 scores and RS, our findings may be biased. Another limitation is that we only examined reproducibility across two pathologists, and results might differ if other pathologists had been included. However, these pathologists do the majority of the IHC scoring in our health care system. We also did not examine reproducibility of Ki67 scores across multiple IA platforms or laboratories, but the laboratory we used (NeoGenomics) is CLIA certified.
In summary, we found implementing the IKWG's recommended Ki67 IHC visual scoring protocol challenging in our real-world, high-volume setting, even with very experienced IHC pathologists and centralized IHC reading. Visual scoring results were fairly similar to results by IA, and future studies will be needed to determine the extent to which inter-rater and inter-laboratory reproducibility can be maintained or improved by IA, which would also reduce pathologist time. However, the IKWG cut point of Ki67 ≤ 5% was only able to identify a small proportion of patients who could avoid RS testing based on recent IKWG recommendations. Future studies will be needed to determine whether using a higher Ki67 cut point, as well as including additional tumor features, such as PR > 10% or low tumor grade, could increase concordance and also identify a greater proportion of patients who could avoid RS testing. In the absence of prospective trials of Ki67 in ER+HER2− stage I/II patients, studies will also be needed to determine the amount of discordance with RS or other multigene assay surrogates that would be acceptable to clinicians when trying to develop a testing pathway.

Purpose
The International Ki67 Working Group (IKWG) has developed training for immunohistochemistry (IHC) scoring reproducibility and recommends cut points of ≤ 5% and ≥ 30% for prognosis in ER+, HER2−, stage I/II breast cancer. We examined scoring reproducibility following IKWG training and evaluated these cut points for selecting patients for further testing with the 21-gene Recurrence Score (RS) assay.
Methods
We included 307 women aged 50+ years with node-negative, ER+PR+HER2− breast cancer and with available RS results. Slides from the diagnostic biopsy were stained for Ki67 and scored using digital image analysis (IA). Two IHC pathologists underwent IKWG training and visually scored slides, blinded to each other and IA readings. Interobserver reproducibility was examined using intraclass correlation (ICC) and Kappa statistics.
Results
Depending on reader, 8.8–16.0% of our cohort had Ki67 ≤ 5% and 11.4–22.5% had scores ≥ 30%. The ICC for Ki67 scores by the two pathologists was 0.82 (95% CI 0.78–0.85); it was 0.79 (95% CI 0.74–0.83) for pathologist 1 and IA and 0.76 (95% CI 0.71–0.80) for pathologist 2 and IA. For Ki67 scores ≤ 5%, the percentages with RS < 26 were 92.6%, 91.8%, and 90.9% for pathologist 1, pathologist 2, and IA, respectively. For Ki67 scores ≥ 30%, the percentages with RS ≥ 26 were 41.5%, 51.4%, and 27.5%, respectively.
Conclusion
The IKWG’s Ki67 training resulted in moderate to strong reproducibility across readers but cut points had only moderate overlap with RS cut points, especially for Ki67 ≥ 30% and RS ≥ 26; thus, their clinical utility for a 21-gene assay testing pathway remains unclear.
Keywords

Author contributions
Conceptualization: VCS and LAH. Data collection and processing: VCS, RJB, WJ, RP, SSA, TKL, WG, NA, MV, LAH. Study design and statistical analysis: VCS, NA, CL, LAH. Supervision: VCS and LAH. Writing—original draft: VCS and LAH. All authors contributed to the interpretation of results and critical review of the manuscript.
Funding
This work was supported by The Permanente Medical Group (TPMG) Delivery Science Program.
Data availability
The datasets generated during the current study are not publicly available but are available from the corresponding author on reasonable request.
Declarations
Competing interests
The authors RP, SSA, TKL and WG are employed by NeoGenomics Laboratories, Inc. Author LAH is a co-investigator on an unrestricted grant from Exact Sciences to Kaiser Foundation Research Institute. The authors report no other potential conflicts.
Ethical approval
This is an observational study. The Kaiser Permanente Northern California Research IRB has confirmed that no ethical approval is required.
Consent to participate
The study was approved by the Kaiser Permanente Northern California IRB; patient consent was waived.

Breast Cancer Res Treat. 2024 Oct 17; 203(2):281-289. License: CC BY.
Introduction
Colorectal cancer (CRC) is the third most commonly diagnosed cancer and is responsible for 11% of cancer deaths in Australia [ 1 ] and other developed countries [ 2 , 3 ]. CRC has a five-year survival of around 70%, mainly attributable to finding cancer in its early stages [ 4 , 5 ]. This is achieved with screening programs, such as colonoscopy, or through utilising faecal occult blood tests (FOBT) followed by diagnostic colonoscopy for those returning a positive screening test result [ 6 – 8 ]. In addition, regular surveillance colonoscopy for individuals deemed at elevated risk (those with a previous neoplastic lesion or a significant family history of colorectal cancer) reduces the incidence and mortality of CRC [ 9 , 10 ]. People at elevated risk for CRC are generally recommended to undergo colonoscopy every three to five years [ 11 ]. With the increasing number of colonoscopy procedures worldwide, attention must be paid to the delivery of care, which can be informed by the assessment of patient-reported outcomes.
Screening and surveillance colonoscopy reduces mortality and the incidence of CRC through adenoma removal [ 12 – 14 ]; however, the colonoscopy procedure is associated with discomfort, pain and a risk of adverse events such as perforation [ 15 ]. It is also argued that knowing one’s results after a colonoscopy may be associated with a certain degree of anxiety depending on the nature of the results [ 16 – 18 ]. As such, both the procedure and diagnostic results may have an impact on health-related quality of life (HRQoL). Several of these studies reporting the impact of CRC screening on patient-reported outcomes such as HRQoL have applied generic measures such as the SF-36 [ 18 ], or non-validated scales specifically designed for the studies [ 16 ]. Studies assessing HRQoL following a diagnosis of cancer show that generic measures may not be sensitive to changes in HRQoL outcomes in these populations [ 19 – 21 ]. Yet while the number of cancer-specific measures and studies applying these tools/measures in individuals with cancer has increased [ 22 , 23 ], there is a paucity of research investigating their use to assess changes in HRQoL in people undergoing diagnostic (following a positive FOBT or symptoms) and surveillance colonoscopy for CRC, as well as limited studies comparing them to generic measures.
This study, therefore, assessed HRQoL for individuals undergoing diagnostic or surveillance colonoscopy for CRC, using both generic and cancer-specific measures. The aim was to assess the sensitivity and discriminant validity of two multi-attribute utility instruments (MAUIs), the generic EQ-5D-5L and the cancer-specific EORTC Quality of Life Utility Measure-Core 10 dimensions (QLU-C10D), for individuals undergoing colonoscopy for different indications related to CRC detection. Multi-attribute utility instruments are used to assess quality of life and generate utility estimates for the calculation of quality-adjusted life years (QALYs), the outcome measure required for cost-utility analysis (CUA). With the increasing use of CUA in the assessment of health interventions, it is important to determine the appropriate instrument for a given population to inform the accuracy of utility and cost-effectiveness results.

Methods
Study population
This was a prospective study of an Australian population who had recently undergone a colonoscopy in a public hospital setting (Flinders Medical Centre or Noarlunga Hospital, South Australia). We reviewed clinical records to identify patients aged ≥ 40 years who had a recent colonoscopy and invited them into the study. Individuals were excluded if they had prior treatment for CRC or had a pre-existing and ongoing bowel condition that required medication or was the indication for the colonoscopy (such as inflammatory bowel disease).
The survey was mailed out approximately 14 days after the colonoscopy. This included study information, a consent form and the HRQoL scales, first the generic EQ-5D-5L followed by the cancer-specific scale. For participants who responded to the first survey, a repeat survey was sent one year later. A reminder phone call was made if the survey had not been completed and returned within two weeks.
Clinical measures
All study invitees underwent a diagnostic colonoscopy to investigate the cause of symptoms, a follow-up colonoscopy after a positive FOBT screening test, or a surveillance colonoscopy due to an elevated risk for CRC [ 10 , 11 ]. Colonoscopy findings were reviewed, and diagnosis was classified based on whether any type of polyp was removed, or whether colorectal cancer was diagnosed (divided into early stage (I and II) and advanced stage (III and IV)). Polyps were not divided into subclasses (e.g., advanced or non-advanced adenomas, sessile-serrated lesions, benign polyps) as this level of discrimination was felt to be inappropriate for most individuals. Patients' pathology knowledge is often limited to whether anything was found and removed at colonoscopy, and whether it was cancer [ 24 , 25 ].
Health-related quality of life
The survey collected information on participant demographics (including age, gender, marital status, socioeconomic status (based on the Socio-Economic Indexes for Areas (SEIFA) score), work status, having private health insurance and education level), health status (including having a disability, comorbidities and previous history of surgery and cancer) and HRQoL assessment.
HRQoL was assessed using the EQ-5D-5L [ 26 ], which is a generic multi-attribute utility instrument and the cancer-specific EORTC QLQ-C30 [ 27 ]. The use of a cancer-specific scale was considered appropriate in this population (both diagnostic and surveillance colonoscopies) as studies have shown that these individuals can be fearful of a cancer diagnosis [ 28 – 30 ].
The generic EQ-5D-5L has five dimensions: mobility, self-care, usual activities, pain/discomfort and anxiety/depression, each with responses across five levels: no problems, slight problems, moderate problems, severe problems and extreme problems [ 26 ]. As recommended by the UK National Institute for Health and Care Excellence (NICE), utility scores for the EQ-5D-5L were generated using the EQ-5D-5L crosswalk tariff developed from a general population sample in the UK [ 31 ].
The cancer-specific EORTC QLQ-C30 has one global HRQoL scale, five functional scales (physical, role, emotional, cognitive, social), three symptom scales (fatigue, nausea or vomiting, pain) and six single items (sleeping disorders, appetite loss, dyspnoea, diarrhoea, constipation and financial problems). Each item has four alternative responses (1—not at all; 2—a little; 3—quite a bit; 4—very much) [ 27 ]. The responses were mapped onto the QLU-C10D, a utility scoring algorithm developed by King et al., to generate utility scores [ 32 ]. The QLU-C10D has four functional scales and six symptom scales, each with four levels: not at all, a little, quite a bit, and very much. The functional scales are physical function, role function, social function and emotional functioning, while the symptom scales are pain, fatigue, sleep, appetite, nausea and bowel problems. The value set for the QLU-C10D was based on an Australian general population sample, with theoretical utility scores ranging from −0.095 to 1 [ 33 ].
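The mechanics of mapping questionnaire responses to a utility score can be sketched as per-dimension utility decrements subtracted from full health. The decrement values below are deliberately invented placeholders used only to show the structure of such a calculation; they are not the published QLU-C10D Australian value set of King et al. [ 33 ], and a real analysis would use the published coefficients.

```python
# Sketch of multi-attribute utility scoring in the QLU-C10D style:
# utility = 1 minus a sum of per-dimension decrements.
# The decrements below are PLACEHOLDERS for illustration only;
# they are NOT the published Australian value set.

DIMENSIONS = ["physical", "role", "social", "emotional",
              "pain", "fatigue", "sleep", "appetite", "nausea", "bowel"]

# Decrement for levels 1..4 ("not at all" .. "very much"); level 1 costs 0.
PLACEHOLDER_DECREMENTS = {dim: (0.0, 0.02, 0.06, 0.11) for dim in DIMENSIONS}

def utility(responses):
    """responses: dict mapping dimension name -> level (1..4)."""
    loss = sum(PLACEHOLDER_DECREMENTS[dim][level - 1]
               for dim, level in responses.items())
    return 1.0 - loss

full_health = utility({dim: 1 for dim in DIMENSIONS})      # = 1.0
some_problems = utility({**{dim: 1 for dim in DIMENSIONS},
                         "fatigue": 3, "sleep": 2})        # ~= 0.92
```

With real value sets, the decrements differ by dimension and were elicited from a general population discrete-choice experiment, which is what anchors scores to the 0 (dead) to 1 (full health) scale.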
Data analysis
Data were analysed using Stata version 15 (StataCorp, College Station, TX, USA). Participant characteristics were summarised as means and standard deviations (SD) for continuous variables and absolute numbers and percentages for categorical variables.
Health-related quality of life
Descriptive statistics including means, medians and ranges were compared for each instrument at baseline (immediately after colonoscopy) and at follow-up (one year after colonoscopy). Differences in HRQoL between the two time points were explored using the Wilcoxon signed-rank test for paired data.
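As a concrete illustration of the paired baseline-versus-follow-up comparison, the snippet below runs SciPy's Wilcoxon signed-rank test (the signed-rank variant is the one appropriate for paired data). The utilities and sample size are invented placeholders, not study data; in real data, zero differences and ties need attention via the test's `zero_method` and exact/approximate modes.

```python
from scipy.stats import wilcoxon

# Invented paired utilities for six respondents (illustration only)
baseline  = [0.70, 0.815, 0.90, 0.625, 0.75, 0.885]
follow_up = [0.71, 0.800, 0.92, 0.600, 0.78, 0.850]

# Two-sided test of whether utilities shifted between time points
stat, p = wilcoxon(baseline, follow_up)
```

A non-significant p-value here mirrors the paper's overall finding of no change in HRQoL between the two time points.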
Instrument sensitivity
Lower ceiling effects suggest greater sensitivity and discriminant ability of an instrument. The ceiling effect occurs when the highest possible level of a dimension or score of an instrument or measure is achieved in more than 15% of respondents [ 34 ]. The ceiling effect for EQ-5D-5L was calculated as the proportion of ‘no problem’ responses in each dimension and the proportion of ‘no problem’ in all dimensions. QLU-C10D ceiling effects were calculated as the proportion of level 1 (highest level) on each dimension and all dimensions. Ceiling effects were further explored by examining those reporting full health in one instrument to assess what was reported in the other instrument.
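The ceiling-effect calculation described above reduces to simple proportions. The sketch below, on invented response data, computes the share of respondents at the best level for each dimension and across all dimensions simultaneously, and flags dimensions exceeding the 15% threshold; the function and variable names are our own, not the study's code.

```python
import numpy as np

def ceiling_effects(levels, best_level=1):
    """levels: (respondents x dimensions) array of response levels,
    where `best_level` codes the best possible answer.
    Returns per-dimension ceiling proportions and the proportion of
    respondents at the ceiling on every dimension simultaneously."""
    at_best = np.asarray(levels) == best_level
    per_dim = at_best.mean(axis=0)
    overall = at_best.all(axis=1).mean()
    return per_dim, overall

# Toy data: 5 respondents x 3 dimensions (level 1 = "no problems")
resp = [[1, 1, 1],
        [1, 2, 1],
        [1, 1, 1],
        [2, 1, 1],
        [1, 1, 3]]
per_dim, overall = ceiling_effects(resp)
flagged = per_dim > 0.15        # dimensions showing a ceiling effect
```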
Discriminant validity
Discriminant validity is an instrument's ability to measure expected differences between subgroups of patients [ 35 , 36 ]. Means and SDs were compared between the different indications for and diagnoses at colonoscopy. The indications categories were surveillance, positive FOBT and symptoms. For the diagnoses, comparisons were made in those with and without polypectomy; those with and without cancer; and those with advanced cancer (stages III and IV) compared to those with no cancer or less advanced stages of cancer (stages I & II). Wilcoxon–Mann–Whitney test and the Kruskal–Wallis H test for non-normally distributed data were used to test for differences between subgroups. Differences between subgroups were also explored at the dimension level for both EQ-5D-5L and QLU-C10D.
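These between-group comparisons can be illustrated with SciPy's rank-based tests; the utility values below are invented to mimic three indication groups and are not study data.

```python
from scipy.stats import kruskal, mannwhitneyu

# Invented utility scores by colonoscopy indication (illustration only)
surveillance  = [0.80, 0.85, 0.90, 0.82, 0.88]
positive_fobt = [0.50, 0.55, 0.60, 0.52, 0.58]
symptoms      = [0.20, 0.25, 0.30, 0.22, 0.28]

# Kruskal-Wallis H test across the three indication groups
h_stat, p_kw = kruskal(surveillance, positive_fobt, symptoms)

# Pairwise Wilcoxon-Mann-Whitney test for two of the groups
u_stat, p_mw = mannwhitneyu(surveillance, symptoms, alternative="two-sided")
```

With clearly separated groups like these, both tests reject the null of equal distributions; in practice, the interesting question is whether an instrument detects more subtle subgroup differences.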
The discriminant abilities of both EQ-5D-5L and QLU-C10D were further explored using Tobit regression models, with adjustment for potential confounders of HRQoL to reduce bias [ 37 ]. Confounders considered included age, gender, marital status, having a disability, and comorbidities, which are known to affect health-related quality of life [ 38 – 40 ]. In addition, employment, having private health insurance and education level were considered as proxies for socioeconomic status [ 41 ], and previous history of cancer, history of surgery and the indication for colonoscopy were considered as proxies for baseline health status [ 40 ]. Univariate analyses using Spearman correlation were undertaken, and only variables with a significant correlation with HRQoL were included in the final regression model.
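Stata fits Tobit models directly; in open-source terms, the same censored-regression model can be fit by maximum likelihood. The sketch below simulates utilities right-censored at 1 (full health), the situation that motivates Tobit over OLS here, and recovers the parameters with SciPy. All parameter values, the single covariate, and the variable names are invented for illustration and do not represent the study's model or covariates.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

# Simulate a latent HRQoL score observed only up to a ceiling of 1
n = 2000
x = rng.normal(size=n)                      # one standardized covariate
beta0, beta1, sigma = 0.80, 0.10, 0.15      # invented true parameters
latent = beta0 + beta1 * x + rng.normal(scale=sigma, size=n)
y = np.minimum(latent, 1.0)
censored = latent >= 1.0                    # observed only as y == 1

def neg_loglik(params):
    b0, b1, log_s = params
    s = np.exp(log_s)                       # keep sigma positive
    mu = b0 + b1 * x
    ll = np.where(censored,
                  stats.norm.logsf((1.0 - mu) / s),   # P(latent >= 1)
                  stats.norm.logpdf(y, loc=mu, scale=s))
    return -ll.sum()

start = [y.mean(), 0.0, np.log(y.std())]
res = optimize.minimize(neg_loglik, start, method="Nelder-Mead")
b0_hat, b1_hat = res.x[0], res.x[1]
sigma_hat = np.exp(res.x[2])
```

OLS on `y` would attenuate the slope because the ceiling compresses high values; the Tobit likelihood handles the censored observations through the survival-function term.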
Tobit regression was applied because the HRQoL data were skewed, with over 20% of respondents reporting full health at both baseline and follow-up for the EQ-5D-5L and 10% for the QLU-C10D. The best-fitting model was determined based on the log-likelihood, and a p-value of 0.05 was considered statistically significant.

Results
Study sample
The survey was sent to 644 individuals who had undergone colonoscopies between March 2017 and July 2019. Demographic details of participants are provided in Table 1 . The flow chart in Figure S1 shows the categories of participants.
A total of 246 respondents completed the survey at baseline, a median of 38 days (IQR: 34, 43) after colonoscopy, and 176 at follow-up, a median of 423 days (IQR: 414, 437) after colonoscopy. The baseline sample was predominantly male (54%), with a mean age of 64 years (SD = 8.2), and most did not have private health insurance (60%). Slightly more respondents had a diagnostic colonoscopy because of a positive FOBT (37%) or symptoms (36%) than a surveillance colonoscopy (26%), and 50% of the cohort underwent polypectomy at colonoscopy (Table 1 ). Sixty-nine respondents to the baseline survey did not return the follow-up survey (Table S1). Non-responder demographic characteristics were similar to those of responders, with differences observed in the indication for colonoscopy (46% undertaking a colonoscopy due to symptoms compared to 31% of responders) and colonoscopy findings (29% diagnosed with cancer compared to 14% of responders).
Health-related quality of life
HRQoL for the whole cohort did not differ between baseline and follow-up for either EQ-5D-5L [0.76 (SD = 0.22) and 0.76 (SD = 0.20), p value = 0.23] or QLU-C10D [0.74 (SD = 0.21) and 0.76 (SD = 0.20), p value = 0.58]. Marginally higher scores were observed with EQ-5D-5L than QLU-C10D at baseline, but scores were the same at follow-up.
Ceiling effects of EQ-5D-5L and QLU-C10D
24% (60/246) of respondents reported the best possible level (no problems) for all dimensions of EQ-5D-5L at baseline and 22% (38/176) at follow-up, while 4% (11/246) and 6% (11/176), respectively, had the best possible levels for QLU-C10D (Figures S2 and S3). For every dimension of both EQ-5D-5L and QLU-C10D, over 15% of respondents reported the highest level at both baseline and follow-up, which is an indication of ceiling effects [ 42 ].
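The ceiling-effect calculation itself is just the proportion of respondents at the best possible level. A minimal sketch with hypothetical EQ-5D-5L responses (1 = no problems, 5 = extreme problems; the data below are invented for illustration):

```python
import pandas as pd

# Hypothetical responses from five respondents (1 = best possible level)
df = pd.DataFrame({
    "mobility":           [1, 1, 2, 1, 3],
    "self_care":          [1, 1, 1, 1, 2],
    "usual_activities":   [1, 2, 1, 1, 2],
    "pain_discomfort":    [1, 1, 1, 2, 4],
    "anxiety_depression": [1, 1, 1, 1, 3],
})

full_health = (df == 1).all(axis=1)        # best level on every dimension
overall_ceiling = full_health.mean()       # proportion reporting "full health"
per_dimension_ceiling = (df == 1).mean()   # >15% on a dimension flags a ceiling effect
```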
51 (21%) respondents at baseline and 29 (16%) at follow-up reported full health (utility score = 1) on EQ-5D-5L but not on QLU-C10D. Participants reporting full health with EQ-5D-5L (no problems on all dimensions) still reported less than the best possible level on QLU-C10D, particularly for fatigue (61%) and sleep (59%) at baseline, and for role function (59%) and fatigue (69%) at follow-up (Figs. 1 and 2 and Table S2).
No floor effects were observed: under 2.5% of respondents reported the lowest level on each dimension of the EQ-5D-5L, and under 10% on each dimension of the QLU-C10D, except for physical functioning, where 33.5% reported the lowest level at follow-up (see Figures S4 and S5).
Discriminant validity of EQ-5D-5L and QLU-C10D – bivariate analysis
Table 2 shows the ability of both measures to discriminate between participants with different colonoscopy findings and indications for colonoscopy. Neither measure discriminated between participants with different colonoscopy findings at either time point, but both discriminated between different indications for colonoscopy. Participants receiving colonoscopy because of symptoms had lower HRQoL at both baseline and follow-up as assessed by both EQ-5D-5L [0.71 (0.21), p value = 0.001 and 0.72 (0.20), p value = 0.015] and QLU-C10D [0.67 (0.21), p value < 0.001 and 0.67 (0.21), p value < 0.001].
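A bivariate comparison of this kind can be run with SciPy using the nonparametric tests named in the Methods. The utilities below are simulated around the reported baseline group means and are illustrative only, not the study data:

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(42)
# Simulated utility scores per indication (means mirror the reported values)
symptoms      = np.clip(rng.normal(0.71, 0.21, 87), None, 1.0)
positive_fobt = np.clip(rng.normal(0.78, 0.20, 92), None, 1.0)
surveillance  = np.clip(rng.normal(0.79, 0.20, 67), None, 1.0)

# Kruskal-Wallis H test across the three indications
h_stat, p_overall = kruskal(symptoms, positive_fobt, surveillance)

# Wilcoxon-Mann-Whitney test for a pairwise follow-up comparison
u_stat, p_pair = mannwhitneyu(symptoms, positive_fobt, alternative="two-sided")
```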
Responses to dimensions of the EQ-5D-5L at baseline
At the dimension level (Figure S6), significant differences between indications for colonoscopy were observed in individual responses to the EQ-5D-5L dimensions of usual activities ( p value = 0.04 ), pain/discomfort ( p value = 0.03 ), and anxiety/depression ( p value = 0.05 ) at baseline (Fig. 3). Significantly fewer symptomatic individuals reported the best two levels (no problems or slight problems) for usual activities and pain/discomfort compared to individuals undergoing colonoscopy for surveillance or a positive FOBT. However, more participants undergoing colonoscopy because of a positive FOBT or symptoms reported the best two levels for anxiety/depression compared to those under surveillance.
Responses to dimensions of the QLU-C10D at baseline
Significant differences were observed at the dimension level with the QLU-C10D between the different colonoscopy findings (Figure S7) and indications for colonoscopy (Figure S8). More participants with no polyps, compared to those with polyps, reported no trouble or a little trouble (the best two levels) with physical functioning ( p value = 0.05 ) and pain ( p value = 0.04 ). Differences in the appetite dimension were observed between respondents with no cancer and those with a cancer diagnosis ( p value = 0.003 ), as well as between those with no cancer or early-stage cancer and those with advanced cancer ( p value = 0.002 ). More participants with no cancer or early-stage cancer reported the best level of appetite compared to those with cancer and advanced cancer, respectively.
Figure S6 shows the significant differences with QLU-C10D at baseline observed between indications for colonoscopy for role function ( p value = 0.01 ), appetite ( p value = 0.01 ), and bowel problems ( p value = 0.01 ), with significantly more participants undergoing colonoscopy for symptoms reporting the lower two levels (quite a bit of trouble and very much trouble), and fewer reporting the higher two levels (not at all and a little trouble), than those having a colonoscopy for surveillance or a positive FOBT.
Responses to dimensions of the EQ-5D-5L at follow-up
No significant differences between colonoscopy findings or indications for colonoscopy were observed in individual responses to EQ-5D-5L dimensions at follow-up.
Responses to dimensions of the QLU-C10D at follow-up
Significant differences were observed between participants with no cancer and cancer (all stages) in responses to the dimensions of physical functioning ( p value = 0.02 ) and social functioning ( p value = 0.03 ), as well as between those with early-stage cancer and advanced cancer ( p value = 0.04 and p value = 0.01 , respectively). More participants with cancer and advanced cancer reported high levels of trouble with physical and social functioning compared to those without cancer and those with no cancer or early-stage cancer (Figure S9). With the symptom scales, a significant difference was only observed for appetite ( p value = 0.03 ) when the severity of cancer was considered: only 45% of respondents with advanced cancer reported no lack of appetite, compared to 81% of those with no cancer or early-stage cancer (Figure S9).
Consistent with findings at baseline, more participants undergoing colonoscopy due to symptoms had trouble with the QLU-C10D functional domains of physical functioning ( p value = 0.004 ), role functioning ( p value < 0.001 ), and social functioning ( p value < 0.001 ); see Figure S10. A similar trend was observed with the symptom domains, where more respondents undergoing colonoscopy because of symptoms reported more trouble with appetite ( p value = 0.002 ) and bowel function ( p value = 0.01 ) compared to those undergoing colonoscopy for surveillance or a positive FOBT (Figure S10).
Discriminant validity of EQ-5D-5L and QLU-C10D—Multivariable analysis
Following the univariate analysis, HRQoL at baseline and follow-up was significantly correlated with marital status, having a disability, full-time employment, having private health insurance, a history of cancer, and a history of surgery (see Table S3). These variables were then adjusted for in the regression analysis. After controlling for these potential confounders, neither EQ-5D-5L nor QLU-C10D discriminated between colonoscopy findings at baseline or follow-up (Table S4). EQ-5D-5L discriminated between respondents presenting with symptoms and those with a positive FOBT or under surveillance at baseline ( p value < 0.05 ); presenting with symptoms was associated with lower HRQoL. QLU-C10D likewise discriminated between participants presenting with symptoms and those undergoing colonoscopy for a positive FOBT or surveillance: participants undergoing colonoscopy due to symptoms had lower HRQoL at both baseline ( p value = 0.001 ) and follow-up ( p value = 0.006 ).

Discussion
This research aimed to assess HRQoL in individuals undergoing diagnostic or surveillance colonoscopy for CRC, using both the generic EQ-5D-5L and the cancer-specific QLU-C10D, and to evaluate whether HRQoL differed based on the scale used. This study showed no differences in HRQoL between baseline and follow-up, using either scale, in patients undergoing screening, surveillance, or symptom-driven colonoscopies, except at the dimension level. There is a paucity of studies assessing HRQoL in patients undergoing colonoscopy; however, we hypothesised that there would be differences based on colonoscopy findings. This study showed no differences in HRQoL by colonoscopy findings/outcomes, including for patients diagnosed with cancer, with both EQ-5D-5L and QLU-C10D. This suggests that a single quality-of-life scale would be sufficient to measure HRQoL in post-colonoscopy cohorts in future studies. However, it is also possible that the expected change was not detected due to the small sample size, resulting from premature study termination during the COVID-19 pandemic. Moreover, given that QLU-C10D is a new instrument, there is no prior information on the expected change in scores in our target patient population.
Symptomatic patients had lower overall HRQoL compared to patients undergoing colonoscopy for screening or surveillance purposes. Using the QLU-C10D, after controlling for potential confounders, presentation with gastrointestinal symptoms was associated with lower HRQoL compared to surveillance or a positive FOBT. At the dimension level, more symptomatic individuals reported lower HRQoL at baseline (on both measures) and at follow-up, except for anxiety/depression, where more of those under surveillance reported lower levels. The lower HRQoL reported, at both baseline and follow-up, by the group presenting with symptoms can be attributed to more patients being diagnosed with cancer in this group compared to those undergoing a surveillance colonoscopy or following a positive FOBT. However, the difference in colonoscopy findings (no polyps, polyps, or cancer) between groups was not statistically significant. Also, the multivariate analysis controlled for colonoscopy findings as a confounder, yet the HRQoL difference was still observed (see Table 1 and Table S3).
Our study is different from previous studies assessing HRQoL before and after colonoscopy [ 18 , 43 ] because the respondents knew their colonoscopy results before the baseline assessment. Our results showed no association between colonoscopy findings and overall HRQoL. When assessing HRQoL (SF-36) and psychological distress among participants referred for colonoscopy following a positive FOBT, Vermeer et al. showed an increase in psychological dysfunction and worry following a cancer finding (2 weeks after colonoscopy) and a decline for those with no cancer. For those with a cancer diagnosis, psychological dysfunction declined to pre-colonoscopy measurements after 6 months [ 44 ]. Considering that participants in our study, unlike the above study, knew their colonoscopy findings at baseline (38 days after colonoscopy), this was not a true reflection of their baseline. This means that the change happened before the HRQoL assessment, and the baseline value reflected the post-colonoscopy HRQoL, which suggests that participants are returning to their true baseline level earlier than 6 months. More individuals undergoing surveillance colonoscopy reported problems with anxiety or depression (EQ-5D-5L) at baseline compared to those due to symptoms or positive FOBT, but this was not observed with emotional functioning on the QLU-C10D or after controlling for potential confounders. These results with the EQ-5D-5L agree with studies that suggest that individuals taking part in routine screening/surveillance report higher levels of anxiety over the possibility of cancer [ 45 , 46 ]. Yet the lack of difference in emotional functioning observed with QLU-C10D is also supported by several studies that argue that participation in colorectal cancer screening [ 16 ] or the results of a colorectal cancer screening colonoscopy have no effects on participants' psychological well-being [ 17 , 47 ].
After controlling for potential confounders, participants undergoing colonoscopy because of symptoms had lower HRQoL/utility scores with QLU-C10D at both baseline and one-year follow-up compared to those having a surveillance colonoscopy or a colonoscopy because of a positive FOBT (and only at baseline with EQ-5D-5L). At the dimension level, more individuals presenting with symptoms reported lower levels for usual activities and pain/discomfort on the EQ-5D-5L at baseline. Using QLU-C10D, participants with symptoms reported lower levels for role function, appetite, and bowel problems at both baseline and follow-up. This finding, particularly for appetite and bowel problems, is not surprising, because these are common presenting signs in people undergoing colonoscopy in general [ 48 ] and under investigation for colorectal cancer [ 49 ]. It is particularly important to note that this difference was observed with the cancer-specific scale, QLU-C10D, whose dimensions include disease-related symptom dimensions, unlike the generic scale.
We therefore explored whether the scales used were suitable and sensitive to HRQoL changes in this population. Participants reporting full health with EQ-5D-5L (the best level on all dimensions) still had problems according to QLU-C10D, particularly with fatigue, sleep, and role function. This suggests that QLU-C10D is more sensitive than the generic scale in picking up differences in HRQoL and could potentially be used in all future studies of post-colonoscopy HRQoL, including in non-cancer cohorts. This result is similar to that observed when EQ-5D-5L was compared to the cancer-specific HRQoL scale FACT-8D [ 50 ] and when the three-level version of EQ-5D, EQ-5D-3L, was compared to the cancer-specific EORTC-8D, which, like QLU-C10D, is derived from the EORTC QLQ-C30 [ 19 ]. Both studies showed that, compared to the cancer-specific scales, the generic scales failed to detect impairments with fatigue and sleep disturbances. These findings highlight a gap within the EQ-5D descriptive system, supporting the argument by Chen and Olsen and by Sprouk et al. to add sleep and fatigue bolt-on dimensions to the EQ-5D descriptive system [ 51 , 52 ].
Similar to other studies [ 19 , 21 ], our results suggest that when assessing short-term HRQoL outcomes in populations undergoing diagnostic colonoscopy for cancer, a cancer-specific scale is more sensitive than generic scales of HRQoL. The sensitivity of a scale is critical when assessing the cost-effectiveness of interventions in a particular population: a more sensitive scale will detect change that would otherwise go undetected with a less sensitive scale, which influences the incremental cost-effectiveness ratio (ICER) and, subsequently, the cost-effectiveness decision. While decision-making bodies (e.g. NICE, PBAC and MSAC) require the use of generic scales for purposes of economic evaluations [ 53 – 55 ], we propose that economic evaluations assessing HRQoL in this setting should consider the cancer-specific scale alongside the generic scale, as it is more sensitive to differences.
Limitations of this study include the low response rate (38%), which, although similar to other quality-of-life postal surveys [ 56 ], can be attributed to the premature termination of study data collection due to the COVID-19 pandemic in 2019/2020. We also observed that more participants undergoing colonoscopy due to symptoms and those diagnosed with cancer did not respond to the survey, which indicates a response bias. There are known differences based on diagnosis after colonoscopy, and this was hypothesised in this study; however, the study did not have the power to detect the expected change due to the small sample size resulting from premature study termination. Furthermore, given that QLU-C10D is a new instrument, there is no prior information on the expected change in scores in our target patient population. We therefore recommend a larger future study with both asymptomatic and symptomatic patients. Future research should assess anxiety levels and cancer concerns in symptomatic patient cohorts, to assess whether these concerns persist despite a colonoscopy ruling out cancer, and if so, how such concerns can be better allayed. Also, our baseline survey was conducted after participants had received their colonoscopy results; as such, we cannot provide a direct comparison to previous studies whose baseline assessments were before the colonoscopy procedure. Another possible limitation is that the EQ-5D-5L utility scores were generated based on a UK population value set, as recommended by NICE [ 31 ], while the QLU-C10D was valued based on an Australian population. A final possible limitation is an ordering effect, as the survey presented the generic EQ-5D-5L before the EORTC QLQ-C30 at both baseline and follow-up. However, studies have shown that presentation order has only a marginal effect on patient responses to HRQoL scales [ 57 , 58 ].

Conclusion
HRQoL does not change in the year following colonoscopy, and it does not differ between different colonoscopy diagnoses, including cancer. However, patients undergoing colonoscopy because of symptoms have poorer HRQoL compared to those undergoing surveillance colonoscopy for cancer. In addition, a cancer-specific scale is more sensitive than a generic scale to HRQoL differences in patients undergoing colonoscopy.

Purpose
To compare the sensitivity and discriminant validity of generic and cancer-specific measures for assessing health-related quality of life (HRQoL) for individuals undergoing diagnostic or surveillance colonoscopy for colorectal cancer.
Methods
HRQoL was assessed using EQ-5D-5L (generic), and EORTC QLQ-C30 (cancer-specific) scales, 14 days after (baseline) and one-year following colonoscopy (follow-up). Utility scores were calculated by mapping EORTC-QLQ-C30 onto QLU-C10D. Differences between participants with different indications for colonoscopy (positive faecal occult blood test (FOBT), surveillance, or symptoms) and colonoscopy findings (no polyps, polyps, or cancer) were tested using Wilcoxon-Mann–Whitney and Kruskal–Wallis H tests. Sensitivity was assessed by calculating the ceiling effects (proportion reporting the best possible level).
Results
246 adults completed the survey, including those undergoing colonoscopy for symptoms ( n = 87), positive FOBT ( n = 92) or surveillance ( n = 67). Those with symptoms had the lowest HRQoL at both baseline and follow-up, with differences observed within the HRQoL domains/areas of role function, appetite loss and bowel function on the QLU-C10D. No differences were found in HRQoL when stratified by findings at colonoscopy with both measures or when comparing baseline and follow-up responses. Participants reporting full health with EQ-5D-5L (21% at baseline and 16% at follow-up) still had problems on the QLU-C10D, with fatigue and sleep at baseline and with role function and fatigue at follow-up.
Conclusion
Patients undergoing colonoscopy for symptoms had lower HRQoL compared to surveillance or positive FOBT. The cancer-specific QLU-C10D was more sensitive and had greater discriminant ability between patients undergoing colonoscopy for different indications.
Supplementary Information
The online version contains supplementary material available at 10.1007/s10552-023-01789-6.
Supplementary Information
Below is the link to the electronic supplementary material.

Author contributions
All authors contributed to the study’s conception and design. Material preparation, data collection and analysis were performed by ES, NB, GC and EM. The first draft of the manuscript was written by NB, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Funding
Open Access funding enabled and organized by CAUL and its Member Institutions. Research funding for this study was provided by the National Health and Medical Research Council Project Grant (APP1101837). Author N Bulamu was supported by a grant funded by the financial support of Cancer Council SA's Beat Cancer Project on behalf of its donors and the State Government of South Australia through the Department of Health together with the support of the Flinders Medical Centre Foundation, its donors and partners.
Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author upon reasonable request.
Declarations
Conflict of interest
None declared.
Ethical approval
The study protocol was approved by the Southern Adelaide Human Research Ethics Committee (reference #443.16). The study was prospectively registered with the Australian New Zealand Clinical Trials Registry (ANZCTR reference #12617000003370).
Consent to participate
All participants provided written informed consent with the return of the survey.
Consent to publish
The authors affirm that human research participants provided informed consent for publication.

Cancer Causes Control. 2024 Sep 25; 35(2):347-357 (CC BY).
PMC10787681

Introduction
Peptidyl-lys metalloendopeptidases, more commonly known as LysN (EC 3.4.24.20), are enzymes that preferentially hydrolyze substrates at the N-terminus of lysine residues (Barrett et al. 2004). The first LysN peptidases were isolated from various bacterial and fungal sources, such as My-LysN (from Myxobacter AL-1), Am-LysN (from Armillaria mellea), Po-LysN (from Pleurotus ostreatus), and Gf-LysN (from Grifola frondosa) (Dohmae et al. 1995; Lewis et al. 1978; Nonaka et al. 1995; Wingard et al. 1972). These early reports concluded that peptidases in this group have a pH optimum that ranges from neutral to alkaline (pH 7.0–9.5). These enzymes were also found to be quite thermostable and exhibited relatively high resistance to denaturants such as urea and guanidine hydrochloride. Gf-LysN (Saito et al. 2002) and Am-LysN (Ødum et al. 2016) have also been recombinantly expressed in Komagataella phaffii, both as inactive protein (zymogen) and mature (active) protein, using the full-length native pre-pro-protein, pro-protein, and mature protein coding sequences.
As a self-protection mechanism against inadvertent proteolysis within the cell, endopeptidases are often recombinantly expressed and secreted as inactive precursors, called zymogens or proenzymes, containing inhibitory N-terminal pro-peptides (Demidyuk et al. 2010). The cleavage of pro-peptides from these inactive precursor proteins sometimes occurs through proteolysis by pro-protein convertases within the host’s secretory pathway, resulting in the production of biologically active proteins or peptides. Kex2, also known as kexin (EC 3.4.21.61), was the first identified pro-protein convertase, involved in the processing of α-mating factor and killer toxin precursors in Saccharomyces cerevisiae. Kex2 preferentially hydrolyzes peptide bonds at the C-terminus of lysine-arginine and arginine-arginine residues (Fuller et al. 1988). Proper protein folding and secretion are quite often the two limiting steps in heterologous protein expression. Even small changes in the pro-peptide sequence can have a consequential impact on the expression and activity of the recombinant peptidase (Boon et al. 2020). Higher expression levels of recombinant Am-LysN were reported by Ødum et al. (2016) when the native pro-peptide sequence was used.
Owing to their strict cleavage specificity, high thermostability, and ability to withstand denaturants, LysN peptidases (Gf-LysN especially) have attracted researchers to explore their potential application in proteomics experiments. Gf-LysN has been reported to perform equally well as trypsin, which is the preferred peptidase for mass spectrometry (MS)-based proteomics (Taouatas et al. 2010). Trypsin, however, may fail to produce MS-identifiable peptides derived from the carboxy termini of proteins due to the lack of amino acids that can easily accept protons. To better identify C-terminal peptides, peptidases that cleave at the N-terminus of basic amino acids, such as LysN (Raijmakers et al. 2010; Taouatas et al. 2008) and the recently introduced LysargiNase (Huesgen et al. 2014; Tallant et al. 2006), could be ideal for the generation of positively charged C-terminal peptides that are compatible with LC-MS/MS. LysN also functions as a sister enzyme to LysC, which preferentially hydrolyzes C-terminal lysine residues (Raijmakers et al. 2010; Zhao et al. 2020).
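The cleavage rule is simple enough to state as code. The toy in-silico LysN digest below cleaves the peptide bond on the N-terminal side of every lysine, so each internal fragment carries its lysine at the N-terminus (the mirror image of LysC/trypsin behaviour); the test sequences are invented for illustration:

```python
def lysn_digest(protein: str) -> list[str]:
    """In-silico LysN digestion: cleave the peptide bond on the
    N-terminal side of every lysine (K), so each internal fragment
    starts with K."""
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        if aa == "K" and i > start:  # cleave before K (never at the very start)
            peptides.append(protein[start:i])
            start = i
    peptides.append(protein[start:])
    return peptides
```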
The aim of this work was to produce a novel LysN peptidase with robust biochemical characteristics that is applicable over a broad pH range. The recombinant LysN presented in this study was identified in Trametes coccinea BRFM310. The LysN from T. coccinea BRFM310 (named “Tc-LysN”) was recombinantly expressed in Komagataella phaffii and purified to homogeneity by single-step anion-exchange chromatography. Tc-LysN was biochemically characterized, and the mechanism involved in its maturation was also evaluated.

Materials and methods
Chemicals and equipment
Analytical grade reagents and chemicals were procured from either Merck (Darmstadt, Germany) or Carl Roth (Karlsruhe, Germany) unless stated otherwise. Azocasein was obtained from Megazyme (Limerick, Ireland). Spectrophotometric analyses were carried out in Epoch 2, manufactured by Biotek (Winooski, USA). Flat-bottom 96-well microtiter plates were purchased from Carl Roth (Karlsruhe, Germany). SDS-PAGE was carried out in Mini Gel Tank by Thermo Fisher Scientific (Dreieich, Germany). All the equipment for nano-LC-ESI-MS/MS experiments was manufactured by Thermo Fisher Scientific (Dreieich, Germany). Protein purification via liquid chromatography was carried out on Äkta Go manufactured by GE Healthcare Biosciences (Uppsala, Sweden). Reaction vessels were incubated in ThermoMixer® C manufactured by Eppendorf (Hamburg, Germany).
Gene fragment, plasmid, strains, media, and kits
The native pro-protein (zymogen) sequence of Trametes coccinea’s peptidyl-lys metalloendopeptidase (UniProt accession# A0A1Y2IQZ5) was back-translated and codon optimized for expression in Komagataella phaffii. The gene fragment was ordered for synthesis at Twist Bioscience (San Francisco, CA, USA). The proprietary expression vector, pBSY2S1Z (plasmid map can be found in Online Resource 1), and the proprietary expression host, Komagataella phaffii BG10, were procured from BISY (Graz, Austria). E. coli DH5α was purchased from New England Biolabs (Frankfurt am Main, Germany). Invitrogen’s Pichia EasyCompTM Transformation Kit (Thermo Fisher Scientific, Dreieich, Germany) was used to prepare and transform competent Komagataella phaffii BG10 cells. Restriction digestion enzymes, ligase(s), and buffer(s) were sourced from New England Biolabs (Frankfurt am Main, Germany). All media, including Luria-Bertani (LB), yeast extract peptone dextrose (YPD), buffered glycerol complex (BMGY), and buffered methanol complex (BMMY), were prepared according to the guidelines of Invitrogen’s Pichia Expression Kit (Publication # MAN0000012). Zeocin® was purchased from Invivogen (Toulouse, France). Molecular biology kits, including plasmid miniprep, DNA purification, and gel extraction, were sourced from Zymo Research Europe (Freiburg im Breisgau, Germany).
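A toy back-translation along these lines is sketched below. The "preferred codon" table is a hypothetical illustration covering a few residues only; real codon optimization uses full K. phaffii codon-usage tables and also screens for restriction sites, GC content, and mRNA secondary structure:

```python
# Hypothetical preferred-codon table for a few residues (illustration only)
PREFERRED_CODON = {
    "M": "ATG", "K": "AAG", "L": "TTG", "N": "AAC",
    "G": "GGT", "S": "TCT", "P": "CCA", "*": "TAA",
}

def back_translate(peptide: str) -> str:
    """Back-translate a peptide to DNA, one preferred codon per residue."""
    return "".join(PREFERRED_CODON[aa] for aa in peptide)
```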
Construction of pBSY2S1Z— Tc -LysN plasmid and expression of Tc -LysN
The native pro-protein gene for Tc-LysN was cloned in-frame with the S. cerevisiae α-mating factor secretory signal into pBSY2S1Z, under the control of the methanol-inducible AOX1 promoter, via golden gate cloning (Engler et al. 2008) using SapI restriction sites. Chemically competent E. coli DH5α cells were transformed with the resulting expression vector, pBSY2S1Z–Tc-LysN. Isolated recombinant plasmids from single-colony transformants, selected on low-salt Luria-Bertani (LB) plates supplemented with 25 μg*mL−1 Zeocin®, were sent for DNA sequencing to Genewiz (Leipzig, Germany).
Komagataella phaffii BG10 transformation was carried out by stringently following the guidelines of Invitrogen’s Pichia EasyCompTM Transformation Kit. In brief, 50 μL chemically competent Komagataella phaffii BG10 cells were transformed using ∼10 μg purified pBSY2S1Z–Tc-LysN that had been linearized with SacI-HF (according to NEB’s protocol). Single colonies were screened for enzyme activity after ~24 h of induction in BMMY at 30 °C using the standard method outlined in Invitrogen’s Pichia EasyCompTM Transformation Kit. Tc-LysN activity was analyzed by employing the azocasein assay (“Determination of endopeptidase activity using the azocasein assay” section).
Quantification of protein content
RotiNanoquant 5X (Carl Roth, Karlsruhe, Germany) was used to estimate protein content based on the method established by Bradford ( 1976 ). In short, 50 μL sample and/or standard was mixed with 200 μL RotiNanoquant (1X) and incubated at 30 °C for 5 min in the dark. The absorbance at 450 nm and 590 nm was measured using the microtiter plate reader. A calibration curve was generated using bovine serum albumin (BSA; Biowest, Nuaillé, France) as the standard protein within the range of 0–150 μg*mL−1.
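The concentration read-out from such an assay comes from inverting a linear standard curve. The sketch below fits one to hypothetical absorbance readings for the BSA standards (a single wavelength is used for simplicity; the protocol above reads both 450 nm and 590 nm):

```python
import numpy as np

# Hypothetical absorbance readings for BSA standards (0-150 ug/mL)
standards_ug_ml = np.array([0, 25, 50, 75, 100, 150], dtype=float)
absorbance      = np.array([0.05, 0.14, 0.23, 0.32, 0.41, 0.59])

# Linear standard curve: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(standards_ug_ml, absorbance, deg=1)

def protein_conc(a590: float) -> float:
    """Estimate protein concentration (ug/mL) from a sample absorbance."""
    return (a590 - intercept) / slope

sample_conc = protein_conc(0.30)
```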
Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and molecular mass estimation
Laemmli’s ( 1970 ) protocol was employed for SDS-PAGE with minor modifications. A gradient gel (4–20%; NovexTM WedgeWellTM Tris-Glycine, Thermo Fisher Scientific GmbH, Dreieich, Germany) was used to separate proteins. Broad-range protein markers (10–200 kDa; P7719S, New England Biolabs GmbH, Frankfurt am Main, Germany) were utilized as reference proteins for molecular mass estimation. Protein bands were visualized by Coomassie staining using GelCodeTM Blue Safe dye (Thermo Fisher Scientific GmbH, Dreieich, Germany).
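Molecular mass estimation from a gel typically assumes that log10(mass) is roughly linear in the band's relative migration distance (Rf). The marker positions below are hypothetical, chosen only to illustrate the fit:

```python
import numpy as np

# Hypothetical marker lane: relative mobility (Rf) of bands of known mass
marker_kda = np.array([200.0, 100.0, 50.0, 25.0, 12.5])
marker_rf  = np.array([0.10, 0.30, 0.50, 0.70, 0.90])

# Fit log10(mass) as a linear function of Rf
slope, intercept = np.polyfit(marker_rf, np.log10(marker_kda), deg=1)

def estimate_mass_kda(rf: float) -> float:
    """Estimate an unknown band's mass (kDa) from its relative mobility."""
    return float(10 ** (slope * rf + intercept))
```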
In-gel digestion
In-gel digestion of SDS-PAGE bands was performed using trypsin (Roche, Germany) according to the method of Shevchenko et al. ( 1996 ). Following enzymatic digestion, the resulting supernatant was transferred to a fresh tube, subjected to drying using a vacuum centrifuge, and then preserved at –20 °C. Subsequently, the dried samples were reconstituted in a solution containing 0.1% (v/v) trifluoroacetic acid (TFA) for analysis using nano-liquid chromatography–tandem mass spectrometry (nano-LC-MS/MS). Identification of neo-N-Termini was performed by in-gel labeling of protein N-termini using TMTpro Zero (Thermo Fisher Scientific) according to the manufacturer’s instructions. Briefly, gel bands were excised with a scalpel from the SDS-PAGE gel and cut into small cubes. Gel cubes were completely covered with acetonitrile and incubated for 10 min. Subsequently, the supernatant was removed and gel cubes were covered with 50 mM HEPES pH 8.5 and incubated for 10 min. This procedure was repeated three times to completely remove the SDS/Tris-HCl buffer from the gel cubes. TMTpro Zero labeling was performed by adding 10 μL TMTpro Zero reagent (10 μg*μL −1 in HEPES pH 8.5) and 150 μL of 50 mM HEPES pH 8.5 to the sample, followed by incubation for 1 h at room temperature. The labeling reaction was stopped by adding 15 μL hydroxylamine (5% v/v). Reduction, alkylation, and further processing of the gel cubes was performed according to the reference protocol of Shevchenko et al. ( 1996 ), with the exception that trypsin and chymotrypsin (Roche, Penzberg, Germany) were used as peptidases.
In-solution digestion of bovine serum albumin (BSA) using Tc -LysN
Two microgram bovine serum albumin (Sigma-Aldrich, Taufkirchen, Germany) was dissolved in 2 M urea, 10 mM ammonium hydrogencarbonate (pH 8.8). Dithiothreitol (DTT) was added to a final concentration of 10 mM for reduction of cysteines. The samples were then incubated for 30 min at 37 °C under shaking at 1000 rpm. Alkylation of cysteines was performed by adding 30 mM iodoacetamide, followed by incubation at 37 °C for 20 min in the dark. Alkylation was stopped by adding 50 mM DTT and the pH was adjusted by adding ammonium acetate (pH 5.0) to a final concentration of 100 mM. At a peptidase:substrate ratio of 1:50, 40 ng Tc -LysN peptidase was added and the samples were digested overnight at 40 °C. In a control experiment, BSA was digested overnight with trypsin in 10 mM ammonium hydrogencarbonate (pH 8.8) at 37 °C using a peptidase:substrate ratio of 1:20. Peptide mixtures were concentrated and desalted on C18 stage tips (Rappsilber et al. 2003 ) and dried under vacuum. Dried samples were dissolved in 30 μL 0.1 % (v/v) TFA and aliquots of 1 μL were injected for nanoLC-MS/MS analyses.
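As a quick check of the enzyme-load arithmetic above (a 1:50 w/w peptidase:substrate ratio for 2 μg substrate corresponds to 40 ng peptidase):

```python
def peptidase_mass_ng(substrate_ug: float, ratio: float = 50.0) -> float:
    """Peptidase mass (ng) needed for a 1:ratio (w/w) peptidase:substrate digest."""
    return substrate_ug * 1000.0 / ratio

# 2 ug BSA at 1:50 calls for 40 ng Tc-LysN, as used in the protocol;
# the trypsin control at 1:20 would call for 100 ng
tc_lysn_ng = peptidase_mass_ng(2.0, 50.0)
trypsin_ng = peptidase_mass_ng(2.0, 20.0)
```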
Nano-LC-MS/MS
Nano-LC-ESI-MS/MS experiments were carried out using an Ultimate 3000 RSLC nano system (Dionex, Thermo Fisher Scientific, Germany) connected to an Orbitrap Exploris 480 mass spectrometer (Thermo Fisher Scientific, Germany) through an EASY-Nano Flex ion source (Thermo Fisher Scientific, Germany). Tryptic peptides were injected directly onto a pre-column (μ-pre-column C18 PepMap100, 300 μm, 100 Å, 5 μm × 5 mm, Thermo Fisher Scientific) that was connected to a NanoEase analytical column (NanoEase M/Z HSS C18 T3, 1.8 μm, 100 Å, 75 μm × 250 mm column, Waters GmbH, Germany). The columns were operated at 35 °C. The flow rate during gradient elution was maintained at 300 nL*min −1 . An LC gradient with the following profile was applied: 2–55% solvent B in 27 min, 55–95% solvent B in 10 min, 5 min isocratic at 95% solvent B, 95–2% solvent B in 10 min, re-equilibration for 5 min at 2% solvent B. Solvent A was 0.1% (v/v) formic acid in H 2 O. Solvent B was 0.1% (v/v) formic acid in acetonitrile. XCalibur version 4.4 (Thermo Fisher Scientific Inc., USA) controlled the Orbitrap Exploris 480. Survey spectra (m/z = 300–1500) were detected in the Orbitrap at a resolution of 60,000 at m / z = 200. Data-dependent MS/MS mass spectra were generated for the 30 most abundant peptide precursors using high energy collision dissociation (HCD) fragmentation at a resolution of 15,000 with a normalized collision energy of 30.
MS data analysis
Proteins were identified with Mascot 2.6 (Matrix Science, UK). Spectra were searched against the Swissprot database or the Trametes coccinea BRFM310 protein database downloaded as FASTA-formatted sequences from UniProt (www.uniprot.org). Search parameters specified LysN as the cleaving enzyme, a 5-ppm mass tolerance for peptide precursors, and 0.02 Da for fragment ions. Alternatively, no enzyme was specified for an unspecific search. Carbamidomethylation of cysteine residues was defined as a fixed modification. Methionine oxidation was allowed as a variable modification. For TMTpro Zero experiments, trypsin and chymotrypsin were specified as cleaving enzymes and TMTpro Zero was allowed as a variable modification at peptide N-termini and lysine. Mascot search results were imported into Scaffold version 4.10.0 (Proteome Software, USA). Peptide identifications were accepted with a peptide probability greater than 90.0% as specified by the Peptide Prophet algorithm (Keller et al. 2002 ). Proteins had to be identified by at least two peptides with a protein probability of at least 99% to be accepted. Protein probabilities were assigned by the Protein Prophet algorithm (Nesvizhskii et al. 2003 ).
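The acceptance criteria described above (peptide probability > 90%, protein probability ≥ 99%, at least two accepted peptides per protein) amount to a simple filter. The sketch below is purely illustrative; the dictionary layout is a hypothetical stand-in, not the actual Scaffold export format:

```python
def accept_proteins(proteins):
    """Filter identifications by the thresholds used above:
    peptides must exceed 90% probability; proteins must reach at least
    99% probability and retain at least two accepted peptides."""
    accepted = []
    for protein in proteins:
        peptides = [p for p in protein["peptides"] if p["probability"] > 0.90]
        if protein["probability"] >= 0.99 and len(peptides) >= 2:
            accepted.append({**protein, "peptides": peptides})
    return accepted
```

A protein with a high protein probability but only one confident peptide is rejected, mirroring the two-peptide rule.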
Production of Tc -LysN in shake-flasks
Tc -LysN was produced in a 2 L baffled shake-flask containing 200 mL BMMY by adapting a high-cell-density fermentation method (Kaushik et al. 2016 ). In short, a preculture of recombinant K. phaffii was cultivated overnight at 30 °C in 50 mL BMGY (containing 100 μg*mL −1 zeocin). The preculture was used to inoculate 200 mL of BMGY in a 2 L baffled shake-flask, which was incubated at 30 °C and 180 rpm until the OD 600nm reached ~47.0. The cells were then pelleted and resuspended in 200 mL BMMY (containing 0.5% methanol) in a 2 L baffled shake-flask. Tc- LysN expression was induced for 96 h at 30 °C and 180 rpm with 0.5% methanol supplementation every ~12 h. At the end of the induction phase, the culture medium was centrifuged at 5000 × g and the supernatant was filtered through a 0.22 μm membrane. The supernatant was then concentrated ~20-fold using a 5 kDa cross-flow membrane (Vivaflow® 50 PES, Sartorius Stedim, Goettingen, Germany) on ice. The concentrated Tc -LysN was then buffer-exchanged into 20 mM MOPS buffer (pH 7.2) using the same cross-flow membrane.
Purification of Tc -LysN
Protein purification was performed at room temperature on an Äkta Go purification system. The filtered, concentrated, and buffer-exchanged Tc -LysN was loaded onto a HiTrap Q FF 5 mL column (GE Healthcare Bio-Sciences, Munich, Germany) via the sample injection pump. The column was pre-equilibrated with the binding buffer (20 mM MOPS pH 7.2). The sample was applied at a flow rate of 2 mL*min −1 . Unbound protein was washed out with 22 column volumes of binding buffer at a flow rate of 10 mL*min −1 . Elution was carried out with a linear gradient (0–65%) of the elution buffer (20 mM MOPS + 1 M NaCl pH 7.2) at a flow rate of 2 mL*min −1 . Protein elution was monitored at 280 nm. Fractions were tested for Tc- LysN activity using the azocasein assay. Fractions with the highest Tc- LysN activity were pooled and then dialyzed against 5 mM sodium acetate buffer (pH 5.0). Aliquots of the dialyzed Tc- LysN were stored at –20 °C until further use.
Activation of Tc -LysN zymogen
In order to investigate the potential involvement of an endogenous K. phaffii endopeptidase with trypsin-like activity in the maturation of Tc -LysN zymogen into its active form, 250 μL of concentrated BMMY culture supernatant (post 24 h induction) was incubated with 425 USP-U of porcine pancreatic trypsin (Carl Roth, Karlsruhe, Germany) at pH 7.5 (60 mM MOPS) for 60 min at 37 °C. To account for any changes induced simply by temperature and/or prolonged incubation time, two controls were also set up: one with only culture supernatant without trypsin and another with only trypsin and no culture supernatant. Samples were withdrawn at specified time points for SDS-PAGE analysis and assessment of Tc -LysN's activity. Samples for SDS-PAGE analysis were immediately mixed with reducing sample loading buffer and heated for 5 min at 90 °C before being loaded onto a 4–20% precast gradient gel (Novex™ WedgeWell™ Tris-Glycine, Thermo Fisher Scientific GmbH, Dreieich, Germany). Samples withdrawn for the measurement of Tc -LysN's activity were first incubated with 10 mM phenylmethylsulfonyl fluoride (PMSF; Carl Roth, Karlsruhe, Germany) for 30 min at room temperature to inactivate trypsin. The experiment was additionally conducted with varied parameters such as trypsin dose, incubation time, and incubation temperature.
Biochemical characterization
Determination of endopeptidase activity using the azocasein assay
The azocasein assay was performed to determine the proteolytic activity of Tc -LysN according to the method of Iversen and Jørgensen (1995) with slight modifications (Ahmed et al. 2022 ). A 3% (w/v) substrate stock solution was prepared by dissolving azocasein in H 2 O dd . The assay was performed as follows: 200 μL of sodium acetate buffer (pH 5.0, 50 mM final concentration) and 30 μL of the azocasein stock solution were added to a 1.5 mL microfuge tube. The substrate was equilibrated at the specified temperature (37 °C–90 °C) for 5 min. The hydrolysis was initiated by adding 10 μL of appropriately diluted and separately pre-equilibrated (37 °C–90 °C) purified Tc -LysN. The hydrolysis was carried out at the specified temperature (37 °C–90 °C) in a thermo mixer at 1000 rpm. The hydrolysis was terminated at various time intervals by dispensing 30 μL of 2 M trichloroacetic acid (TCA). For blanks, 30 μL of 2 M TCA was added prior to the addition of enzyme under the same conditions. The spectrophotometric analysis was carried out by dispensing 150 μL of 1 M NaOH into microtiter plate wells, followed by 150 μL supernatant from the centrifuged hydrolysates. The absorbance was measured at 450 nm using the microtiter plate reader after 15 s of linear shaking at room temperature. One azocasein unit (ACU) was defined as an increase of 1 absorbance unit*min −1 *mL −1 at 450 nm.
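Given the unit definition above (one ACU corresponds to an increase of 1 absorbance unit per min per mL of enzyme at 450 nm), volumetric activity can be computed from a blank-corrected A450 reading. The following sketch is our own illustration; in particular, the handling of the dilution factor is an assumption rather than part of the cited method:

```python
def azocasein_units(delta_a450: float, time_min: float,
                    enzyme_volume_ml: float, dilution_factor: float = 1.0) -> float:
    """Volumetric activity (ACU per mL) from a blank-corrected absorbance
    increase at 450 nm, the reaction time (min), and the enzyme volume (mL).
    The dilution factor (assumed handling) scales back to the undiluted stock."""
    return delta_a450 * dilution_factor / (time_min * enzyme_volume_ml)

# e.g. a blank-corrected dA450 of 0.25 in 5 min with 10 uL (0.01 mL) enzyme
print(round(azocasein_units(0.25, 5.0, 0.01), 3))  # -> 5.0
```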
Determination of pH optimum, temperature maximum, and thermostability
The azocasein assay was employed to characterize Tc -LysN biochemically. The pH optimum was determined by measuring the proteolytic activity after 5 min at 37 °C. Buffers (50 mM final concentration) with overlapping pH ranges (sodium citrate-citric acid pH 3.0–4.0; sodium acetate pH 4.0–5.5; MES pH 5.5–6.6; MOPS pH 6.6–7.5; Tris-HCl pH 7.5–8.5; glycine-HCl pH 8.5–10) were utilized to simultaneously evaluate the effect of buffer salts on proteolytic activity. The temperature maximum was determined by measuring proteolytic activity at pH 5.0 in 50 mM (final concentration) sodium acetate buffer within the temperature range of 10–90 °C after 10 min. Additionally, aliquots of the purified Tc -LysN were incubated for ~18 h at temperatures between 0 °C and 80 °C. The thermostability of Tc -LysN was evaluated by measuring the proteolytic activity at 60 °C in sodium acetate buffer (final concentration 50 mM, pH 5.0) using the aforementioned pre-incubated Tc -LysN aliquots. Proteolytic activities were calculated as averages of triplicate measurements for each experiment.
Effect of ions, solvents, reducing agents, and peptidase inhibitors on proteolytic activity
The azocasein assay was employed to determine the effect of mono/divalent ions, solvents, reducing agents, and peptidase inhibitors at various concentrations on the proteolytic activity of the purified Tc -LysN. The assay was carried out in sodium acetate buffer (50 mM final concentration, pH 5.0) at 60 °C. The purified Tc -LysN was incubated with the respective test substances at final concentrations of 5 mM and/or 10 mM for 10 min at 60 °C before initiating the hydrolysis by the addition of the pre-equilibrated (60 °C) substrate. Proteolytic activity observed without the addition of any test substance was defined as 100% activity. Proteolytic activities were calculated as averages of triplicate measurements for each experiment.
GenBank accession numbers
The GenBank accession number for the primary amino acid sequence of peptidyl-lys metalloendopeptidase from T. coccinea BRFM310 ( Tc -LysN) is OSD03546.1. The GenBank accession number for the codon-optimized synthetic nucleotide sequence used to express Tc -LysN in K. phaffii is OR161067. | Results
Production of Tc -LysN in K. phaffii , its purification and activation
Tc -LysN was expressed as a pre-pro-protein using the native pro-protein sequence and S. cerevisiae 's α-mating factor secretory signal. Tc -LysN was secreted into the BMMY culture broth as pro- Tc -LysN (zymogen) as well as Tc -LysN (active enzyme) upon induction with 0.5% methanol every 12 h for up to 96 h. No significant change in the band intensities of pro- Tc -LysN and Tc -LysN was observed on SDS-PAGE after ~44 h of induction (data not shown), which suggested that pro- Tc -LysN matured into its active form intracellularly. The filter-sterilized culture broth was concentrated using a 5 kDa cross-flow membrane. The concentrated culture broth was buffer-exchanged with 20 mM MOPS buffer (pH 7.2) and applied onto a HiTrap Q FF 5 mL column for anion-exchange chromatography (AEX). Fractions with proteolytic activity eluted between 16 and 22 mS*cm −1 (Fig. 1 A). The purified mature Tc -LysN migrated as a single band of ~19.8 kDa on SDS-PAGE, and pro- Tc -LysN from the culture supernatant migrated as a band of ~38 kDa (Fig. 1 B and Online Resource 2 ). The concentration of the purified Tc -LysN was ~1.3 mg*L −1 as determined by the Bradford assay. A total enzyme activity of ~40 ACU was obtained from the culture broth at the end of purification, corresponding to an activity yield of ~9%.
For identity verification, the ~19.8 kDa band of purified protein (Fig. 1 B and Online Resource 2 ) was subjected to in-gel digestion with trypsin followed by nano-LC-ESI-MS/MS analysis. The ~19.8 kDa band was identified as peptidyl-lys metalloendopeptidase from Trametes coccinea (UniProt accession number A0A1Y2IQZ5). Nano-LC-ESI-MS/MS analysis revealed a 40% sequence coverage of the Tc -LysN precursor protein sequence by tryptic peptides. No tryptic peptides from the N-terminus of the precursor protein, including the potential pro-peptide, were identified. The predicted amino acid sequence of the Tc -LysN zymogen was aligned with the annotated amino acid sequences of the Gf- LysN zymogen and the Am -LysN zymogen using the National Center for Biotechnology Information (NCBI)'s constraint-based multiple alignment tool (COBALT). The mature Tc -LysN shares 60.98% homology with mature Am- LysN and 74.23% homology with mature Gf -LysN (Fig. 2 B). To identify the N-terminus of the active Tc -LysN peptidase, the ~19.8 kDa band of mature Tc -LysN was labeled in-gel using TMTpro Zero reagent and the gel band was subsequently digested with trypsin and chymotrypsin. TMTpro Zero reacts with the alpha amino groups of intact proteins and lysine amino acid side chains. Therefore, peptides with a labeled alpha amino group should comprise the protein N-terminus (Kleifeld et al. 2010 ). Nano-LC-MS/MS analysis of the tryptic and chymotryptic peptides (Online Resource 3 ) revealed that peptides labeled with TMTpro Zero at the alpha amino group contained Glu185 as the N-terminal amino acid, which indicated that the active Tc -LysN peptidase might be produced by removing the first 184 amino acids (Fig. 2 A). This finding suggested that the cleavage of the pro-peptide occurred at the Kex2 cleavage site (KR↓), which is natively present in the pro- Tc -LysN (Fig. 2 A). To assess whether or not a trypsin-like endogenous K.
phaffii endopeptidase was involved in the cleavage of the Tc -LysN pro-peptide, the culture supernatant was incubated with porcine pancreatic trypsin under varied experimental conditions and samples were collected at specified time intervals to evaluate the effect on Tc -LysN's activity and behavior on SDS-PAGE. The bands of the Tc -LysN zymogen and mature Tc -LysN on SDS-PAGE remained unchanged over the course of 60 min of incubation with a catalytic amount of trypsin (Online Resource 4 ). Also, no increase in Tc -LysN's activity in the culture supernatant was observed. Similar results were obtained when this experiment was repeated with variations in parameters such as trypsin dose, incubation time, and incubation temperature (data not shown). It was, thus, concluded that an endogenous K. phaffii peptidase with trypsin-like activity is not involved in the activation of the Tc -LysN zymogen. The molecular weight, based on the identified N-terminus of the active Tc -LysN, was calculated to be 18.3 kDa, which is in close agreement with the molecular weight deduced from the band on SDS-PAGE (Fig. 1 B).
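A sequence-based molecular weight such as the 18.3 kDa value above is conventionally obtained by summing average residue masses over the mature sequence (here, Glu185 to the C-terminus) and adding one water. A generic sketch of that calculation follows; the residue-mass table is the standard average-mass set, and no Tc -LysN sequence is reproduced here:

```python
# Standard average amino acid residue masses (Da); one water is added per chain.
AVG_RESIDUE_MASS = {
    "G": 57.0519, "A": 71.0788, "S": 87.0782, "P": 97.1167, "V": 99.1326,
    "T": 101.1051, "C": 103.1388, "L": 113.1594, "I": 113.1594, "N": 114.1038,
    "D": 115.0886, "Q": 128.1307, "K": 128.1741, "E": 129.1155, "M": 131.1926,
    "H": 137.1411, "F": 147.1766, "R": 156.1875, "Y": 163.1760, "W": 186.2132,
}
WATER = 18.0153

def average_mw(sequence: str) -> float:
    """Average molecular weight (Da) of an unmodified protein or peptide."""
    return sum(AVG_RESIDUE_MASS[aa] for aa in sequence.upper()) + WATER
```

For example, `average_mw("GG")` gives ~132.12 Da, the known mass of diglycine; applied to the mature sequence starting at Glu185, the same summation should reproduce the quoted ~18.3 kDa estimate.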
Biochemical characterization
The proteolytic activity of Tc -LysN under various pH and temperature conditions as well as in the presence of chaotropes, organic solvents, and different divalent cations was evaluated using azocasein as substrate. Buffers with overlapping pH ranges were used to simultaneously analyze the effect of buffer salts on proteolytic activity. Tc -LysN exhibited maximum activity at 60 °C in 50 mM sodium acetate at pH 5.0 (Fig. 3 A and 3 B), while it maintained >50% activity between 40 °C and 70 °C. Tc -LysN maintained >60% activity between pH 4.5 and 7.5 and 30–40% activity between pH 8.5 and 10.0. Tc- LysN's thermostability was evaluated by testing the proteolytic activity of Tc -LysN aliquots that had been incubated for ~18 h at various temperatures under optimum assay conditions. Figure 3 C summarizes the effect of various temperatures on the thermostability of Tc -LysN after 18 h of incubation. Tc -LysN maintained ~65% activity after 18 h of incubation at 50 °C and even retained ~20% activity after being incubated at 80 °C for 18 h.
The influence of different divalent cations, solvents, and denaturing agents on Tc -LysN's proteolytic activity was studied to characterize its performance under variable sample preparation conditions. The results are summarized in Table 1 . Tc -LysN's activity was found to be enhanced by high concentrations of organic solvents. Acetonitrile at a concentration of 40% (v/v) enhanced Tc -LysN's activity by up to ~100%, whereas 40% (v/v) methanol increased the activity by up to ~50%. Urea at concentrations of up to 8 M had no significant negative effect on Tc -LysN's activity. In contrast, >1 M guanidine hydrochloride was not tolerated by Tc -LysN and the activity was reduced to ~30% in its presence. With the exception of 5 mM Cu 2+ , which reduced Tc -LysN's activity to ~20%, none of the other divalent cations (up to 10 mM) affected Tc -LysN's activity negatively. Cobalt at 5 mM was found to enhance Tc -LysN's activity by up to ~60%. Among peptidase inhibitors, only 33.33 mM EDTA was found to completely inactivate Tc- LysN after overnight incubation at 4 °C.
In-solution digestion of bovine serum albumin (BSA) with Tc-LysN
Tc -LysN was evaluated for its application in proteomics experiments. Bovine serum albumin (BSA), a typical standard protein used in proteomics workflows, was used as substrate. Two micrograms of BSA were digested in solution with Tc -LysN at a peptidase:substrate ratio of 1:50. The digestion was carried out overnight at 40 °C using a volatile ammonium acetate buffer pH 5.0. The subsequent nano-LC-MS/MS analysis revealed 84% sequence coverage of BSA with Tc -LysN peptides (Online Resource 5 ). In a control experiment using trypsin as peptidase, 90% sequence coverage of BSA was observed (data not shown). In addition, the number of peptides identified with trypsin (115 unique peptides) was higher than in the Tc -LysN digest (78 unique peptides). These experiments indicate that Tc -LysN may be as well suited for proteomics experiments as established peptidases such as trypsin. | Discussion
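Sequence coverage as reported above is the percentage of protein residues covered by at least one identified peptide. A simple illustration of this bookkeeping follows; mapping peptides by exact substring search is our simplification, and the example fragment and peptides are arbitrary rather than actual identifications:

```python
def sequence_coverage(protein: str, peptides) -> float:
    """Percent of protein residues covered by at least one peptide;
    every exact occurrence of each peptide sequence is counted."""
    covered = [False] * len(protein)
    for pep in peptides:
        start = protein.find(pep)
        while start != -1:
            for i in range(start, start + len(pep)):
                covered[i] = True
            start = protein.find(pep, start + 1)
    return 100.0 * sum(covered) / len(protein)

# 10 of 18 residues covered by two non-overlapping example peptides
print(round(sequence_coverage("MKWVTFISLLLLFSSAYS", ["MKWVT", "LLLLF"]), 1))  # -> 55.6
```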
Purification of LysN peptidases from the fruiting bodies of organisms such as Grifola frondosa is a labor- and resource-intensive procedure as demonstrated by Nonaka et al. ( 1995 ), Stressler et al. ( 2014 ), and more recently by Zhao et al. ( 2020 ). Recombinant expression of LysN simplifies the downstream processing significantly.
Production of Tc -LysN in K. phaffii , its purification and activation
The nano-LC-ESI-MS/MS analysis identified Tc -LysN as peptidyl-lys metalloendopeptidase from Trametes coccinea (UniProt accession number A0A1Y2IQZ5). The N-terminal sequencing of the mature Tc -LysN indicated that the likely route of activation of the Tc -LysN zymogen is the processing of the Kex2 site (KR) by an endogenous K. phaffii Kex2 peptidase or by another, as yet unknown, endogenous K. phaffii endopeptidase with trypsin-like activity. The latter hypothesis was tested by using trypsin to induce maturation of the Tc -LysN zymogen. At any applied dosage, trypsin was found to be unable to activate the Tc -LysN zymogen within the given incubation period (1–6 h). This result appears to be in line with Ødum et al . 's (2016) findings about recombinant Am -LysN also expressed in K. phaffii —with the notable exception that they were able to isolate only the mature LysN from their culture medium, while in this study both the LysN zymogen and the mature LysN were detected. This could likely be explained by the fact that in this study, S. cerevisiae 's α-mating factor was linked to the native pro-protein gene for Tc -LysN by a Kex2 cleavage site. The competition for cleavage at the two Kex2 sites (between the α-mating factor and the pro-protein and between the pro-peptide and the mature protein) could have resulted in limited maturation of the Tc -LysN zymogen.
The observed MW of Tc -LysN is consistent with the MWs of previously studied LysN peptidases (Wingard et al. 1972 ; Nonaka et al. 1995 ; Saito et al. 2002 ; Stressler et al. 2014 ; Ødum et al. 2016a ). The Tc -LysN activity obtained at the end of purification was ~40 ACU under optimal conditions, which corresponded to an activity yield of ~9%. Due to the unavailability of data about the enzyme activities of recombinant LysN peptidases in other studies, a comparison between enzyme activities could not be made. The concentration of the purified Tc -LysN was ~1.3 mg*L −1 , which is 5.2 times higher than that of the hexa-histidine-tagged recombinant Am -LysN expressed in K. phaffii (Ødum et al. 2016 ). The use of a 5 kDa membrane for cross-flow filtration during downstream processing could have resulted in some loss of the ~19.8 kDa Tc- LysN, reducing the final yield. The recombinant Am- LysN was purified to a final concentration of ~0.25 mg*L −1 from minimal glucose medium, but data about its activity yield were not reported (Ødum et al. 2016 ). The hexa-histidine-tagged Gf -LysN from G. frondosa was also recombinantly expressed in K. phaffii , but data about its activity yield and final concentration were not reported (Saito et al. 2002 ). My- LysN was purified from the culture medium of Myxobacter AL-1 with a final activity yield of ~4% after 12 purification steps (Wingard et al. 1972 ). Gf -LysN was purified from the fruiting bodies of G. frondosa in four steps with a final activity yield of ~25% (Nonaka et al. 1995 ). Gf- LysN was also partially purified from the fruiting bodies of G. frondosa to a final concentration of 2 mg*L −1 in three purification steps with a final activity yield of ~0.2% (Stressler et al. 2014 ). More recently, Gf- LysN was purified from the fruiting bodies of G. frondosa to a final concentration of 500 mg*L −1 in six purification steps with a final activity yield of ~0.6% (Zhao et al. 2020 ).
A valid comparison between the yields of recombinant LysN peptidases and yields of LysN peptidases purified from homogenates of the fruiting bodies of the basidiomycetes could not be made since the dry mass of the fruiting bodies and its initial processing varied significantly from one study to another.
Biochemical characterization
Using the model substrates azocoll and azocasein, My -LysN, Gf -LysN, and Am -LysN were reported to possess neutral to alkaline pH optima. Gf -LysN was reported to have a pH optimum of 9.5 with azocasein as substrate (Nonaka et al. 1995 ), while Stressler et al. ( 2014 ) reported <15% azocaseinolytic activity below pH 7.0 for Gf -LysN. The LysN isolated from Pleurotus ostreatus ( Po -LysN) was reported to have a pH optimum of 5.6 with azocasein as substrate; however, this was later revised to pH 8.5 (Dohmae et al. 1995 ). Data about Po -LysN's operational pH range were not made available. Tc -LysN exhibited the highest activity in 50 mM sodium acetate pH 5.0 (100%; Fig. 3 A) with azocasein as substrate, while it retained >60% azocaseinolytic activity between pH 4.5 and 7.5. Below pH 4.5 and above pH 9.0, the proteolytic activity decreased moderately (20–40% residual proteolytic activity), indicating that Tc -LysN is active within a broader pH range than Gf -LysN. It should be noted that Tc -LysN maintained >30% activity between pH 9.0 and 10.0, indicating that it is still applicable at alkaline pH using prolonged incubation.
To the best of our knowledge, Tc -LysN is the first peptidyl-lys metalloendopeptidase reported to work optimally within the acidic pH range while still maintaining proteolytic activity up to pH 8.5 (>40% enzyme activity). Gf- LysN and trypsin work optimally within the alkaline pH range; the latter is reversibly inactivated below pH 4.0. The broader operational pH range of Tc- LysN could enable the exploration of acidic buffer systems for MS-based proteomics applications. For example, Tc -LysN could be ideally suited for disulfide mapping experiments due to its acidic pH optimum of 5.0. Sample digestion at pH 5.0 prevents disulfide exchange reactions that usually take place at alkaline pH. Avoiding disulfide exchange reactions might lead to a lower number of false-positive disulfide bridge identifications (Tsai et al. 2013 ).
Except for Gf -LysN reported by Stressler et al. ( 2014 ), which had a temperature maximum of 55 °C, explicit data about temperature maxima for the other LysN peptidases could not be found. All LysN peptidases, however, were reported to be pH and temperature stable (Lewis et al. 1978 ; Nonaka et al. 1995 ; Nonaka et al. 1998 ; Ødum et al. 2016 ; Wingard et al. 1972 ). Tc -LysN demonstrated maximum proteolytic activity at 60 °C, while maintaining a wide operational range between 40 °C and 70 °C (>50% proteolytic activity). Tc- LysN also retained 65% proteolytic activity after 18 h of incubation at 50 °C and even retained ~20% activity after 18 h of incubation at 80 °C. These results are comparable to those reported by Stressler et al. ( 2014 ) for Gf- LysN . As demonstrated by Taouatas et al. ( 2010 ), varying the combination of incubation time and reaction temperature can result in different numbers of identified peptides during MS analyses. The wide working pH and temperature range of Tc -LysN could, therefore, allow experimental parameters within the proteomics workflow to be altered with more freedom than with trypsin. Currently, trypsin has to be glycated to increase its thermostability to withstand the elevated temperatures during denaturation of native proteins that do not digest readily (Pham et al. 2008 ). The innate ability of Tc- LysN to withstand higher temperatures could make it a more robust proteolytic enzyme for analyzing samples that require harsher denaturation steps within the proteomics workflow.
Organic solvents such as acetonitrile and methanol were found to enhance Tc- LysN's activity: acetonitrile almost doubled the measured azocaseinolytic activity, while methanol enhanced the activity by up to ~50%. Consistent with the results of Taouatas et al. ( 2010 ), high concentrations of guanidine HCl were found to be detrimental to Tc- LysN, while even 8 M urea had no significant impact on Tc- LysN's proteolytic activity. The exceptional tolerance to high concentrations of urea is notable since high urea concentrations can aid the replacement of sodium dodecyl sulfate (SDS), which is used as the primary denaturant in filter-aided sample preparation (FASP) before MS analysis. High concentrations of urea keep the proteins denatured and soluble when SDS is washed off. Owing to its chaotropic properties, urea also reduces the size of SDS micelles, which would otherwise block the pores of the membrane filter (Wiśniewski et al. 2009 ). Reducing agents such as β-mercaptoethanol and dithiothreitol (DTT) had no significant inhibitory effect on Tc -LysN's activity even at a concentration of 10 mM, while 1 mM DTT reduced Gf- LysN's activity by 78% (Nonaka et al. 1995 ). This indicates that potential disulfide groups do not appear to be critical for the proteolytic activity of Tc -LysN (Degraeve and Martial-Gros 2003 ). According to Taouatas et al. ( 2010 ), the proteolytic activity of Gf- LysN decreased significantly below pH 6.5 and the enzyme was essentially inactivated at pH 3.5. In comparison, the Tc- LysN reported in this study retained 75–80% activity at pH 6.5.
Unlike previous reports about LysN peptidases (Nonaka et al. 1995 ; Stressler et al. 2014 ; Wingard et al. 1972 ), Tc- LysN was found to be considerably resistant to inactivation by the potent metal chelator, EDTA. Complete inhibition of Tc- LysN was only observed after the metalloendopeptidase was incubated with 33.33 mM EDTA overnight at 4 °C, whereas Gf -LysN was completely inactivated by 10 μM EDTA (Stressler et al. 2014 ). In contrast, Tc -LysN retained ~30% residual activity after being incubated with 10 mM EDTA at 60 °C for 10 min (data not shown).
Zinc has been reported to be the natural cofactor of Gf -LysN and Am -LysN. The addition of zinc and other metal ions to apo- Gf -LysN has been shown to restore enzymatic activity and also to induce changes in thermostability. Contrary to Nonaka et al.'s (1995) findings about Gf- LysN, where the addition of Co 2+ reduced Gf- LysN's activity by 40%, Tc- LysN's activity was enhanced by Co 2+ more than by any other divalent metal ion (~161% relative activity). It would be of interest to evaluate whether or not the activation of apo- Tc- LysN by Co 2+ and other metal ions could alter Tc- LysN's biochemical properties significantly.
The observed inhibitory effect exerted by Tris-HCl on Tc- LysN’s activity could be attributed to the metal ion chelation properties of tris(hydroxymethyl)aminomethane (Tris) due to the presence of primary amines in its structure (Desmarais et al. 2002 ; Fischer et al. 1979 ). A similar inhibitory effect of Tris on Gf -LysN’s activity at pH 9.0 was reported by Stressler et al. ( 2014 ).
Sample preparation remains one of the most crucial initial steps within proteomics workflows. A typical proteomics analysis begins with the proteolytic digestion of all proteins present in a given sample. The resulting peptide mixture is usually separated by chromatography methods and subsequently analyzed by mass spectrometry (LC-MS) (Tsiatsiani and Heck 2015 ). Even though trypsin is still the most commonly employed peptidase for sample preparation in proteomics workflows, it has some limitations. Reports have indicated that the alkaline conditions optimized for digestion by trypsin cause disulfide bond rearrangement in disulfide mapping experiments (Sanger 1953 ). This problem can be mitigated by performing the digestion with trypsin at suboptimal pH conditions or, alternatively, by using a peptidase that works well under acidic conditions. It is, therefore, important to explore new peptidases that can handle a wide variety of sample preparation conditions within the proteomics workflow without losing their proteolytic efficiency. Tc -LysN was evaluated for its use in proteomics experiments. Bovine serum albumin (BSA) was hydrolyzed with Tc -LysN at pH 5.0. The subsequent nano-LC-MS/MS analysis revealed high sequence coverage (84%) of the BSA sequence with Tc -LysN peptides (Online Resource 5 ). Comparable sequence coverage of BSA (90%) was observed when trypsin was used as peptidase in a control experiment (data not shown); however, a higher number of unique peptides was observed with trypsin. This indicates that Tc -LysN may be used in proteomics experiments as effectively as established peptidases such as trypsin. Further experiments, which are beyond the scope of this manuscript, will be needed to explore the full potential of Tc -LysN for proteomics applications. Both trypsin and Gf -LysN are able to tolerate relatively harsh conditions, but both are restricted by their alkaline pH optimum.
With its robust biochemical characteristics, wide working pH range, and acidic pH optimum, Tc -LysN could be employed to digest samples for which alkaline conditions are not well suited. | Abstract
A novel peptidyl-lys metalloendopeptidase ( Tc -LysN) from Trametes coccinea was recombinantly expressed in Komagataella phaffii using the native pro-protein sequence. The peptidase was secreted into the culture broth as zymogen (~38 kDa) and mature enzyme (~19.8 kDa) simultaneously. The mature Tc -LysN was purified to homogeneity with a single anion-exchange chromatography step at pH 7.2. N-terminal sequencing of the mature Tc- LysN using TMTpro Zero and mass spectrometry indicated that the pro-peptide was cleaved between amino acid positions 184 and 185 at the Kex2 cleavage site present in the native pro-protein sequence. The pH optimum of Tc -LysN was determined to be 5.0, while it maintained ≥60% activity between pH 4.5 and 7.5 and ≥30% activity between pH 8.5 and 10.0, indicating its broad applicability. The temperature maximum of Tc -LysN was determined to be 60 °C. After 18 h of incubation at 80 °C, Tc -LysN still retained ~20% activity. Organic solvents such as acetonitrile and methanol, at concentrations as high as 40% (v/v), were found to enhance Tc -LysN's activity by up to ~100% and ~50%, respectively. Tc -LysN's thermostability, its ability to withstand up to 8 M urea, its tolerance to high concentrations of organic solvents, and its acidic pH optimum make it a viable candidate for proteomics workflows in which alkaline conditions might pose a challenge. Nano-LC-MS/MS analysis revealed a bovine serum albumin (BSA) sequence coverage of 84% using Tc -LysN peptides, which was comparable to the sequence coverage of 90% obtained with trypsin peptides.
Key points
• A novel LysN from Trametes coccinea (Tc-LysN) was expressed in Komagataella phaffii and purified to homogeneity
• Tc-LysN is thermostable, applicable over a broad pH range, and tolerates high concentrations of denaturants
• Tc-LysN was successfully applied for protein digestion and mass spectrometry fingerprinting
Supplementary Information
The online version contains supplementary material available at 10.1007/s00253-023-12986-3.
Keywords
| Supplementary information
| Author contribution
TE and KO conceived and supervised the research. TS and DH designed the synthetic gene constructs. JP and BW performed mass spectrometry experiments and drafted sections of the manuscript specific to those experiments. UA performed all molecular biology and biochemistry experiments. UA took the lead in writing and finalizing the overall manuscript. All authors analyzed data and provided critical feedback to the manuscript. All authors read and approved the manuscript.
Funding
Open Access funding enabled and organized by Projekt DEAL. No dedicated funding source was utilized in this study. The Exploris 480 mass spectrometer was funded in part by the German Research Foundation (DFG-INST 36/171-1 FUGG).
Data availability
All data supporting the findings of this study are available within the paper and its accompanying Online Resource. The synthetic gene sequence for Tc -LysN was deposited into the GenBank database under accession number OR161067.
Declarations
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Conflict of interest
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:52 | Appl Microbiol Biotechnol. 2024 Jan 13; 108(1):1-12 | oa_package/30/09/PMC10787681.tar.gz |
|
PMC10787682 | 38217685 | Introduction
Bibliometrics is one of the few subfields involved in the measurement of science outputs [ 1 ]. Bibliometric indicators are useful tools for evaluating research performance, provided they are precise, advanced, up-to-date, combined with expert knowledge, and interpreted and applied with care [ 2 ]. Citation analysis is a principal bibliometric approach [ 2 ]. Citations may not fully reflect the quality of a work, but highly cited articles often present new ideas or address important problems, so they are valuable in the scientific world. Additionally, the frequent citation of an article could be a strong indication of its reliability as a source for researchers to substantiate their methods or arguments [ 3 ].
Since 1945, the Institute of Scientific Information (ISI) has been collecting bibliometric data from published scientific papers, but their collection was not launched until the Science Citation Index (SCI), a special tool for measuring citations, was first published in 1962 [ 4 ]. Today, the most widely used databases for bibliometric studies are the citation indexes produced by Thomson Reuters, especially Web of Science (WoS) and its predecessor, the SCI [ 2 ]. Google Scholar, a tool sponsored by the Internet search company Google, was created to provide users with a simple way of searching a broad range of scientific literature. Google Scholar employs a matching algorithm to search for keyword search terms in the title, summary, or full text of an article from various publishers and websites [ 5 ]. Around the same time Google Scholar was announced to the public, Elsevier introduced Scopus, an indexing and abstraction service that includes its own citation-tracking tool. Scopus has reportedly indexed more journals than WoS has and included more international and open-access journals [ 5 ].
Altmetric ( https://www.altmetric.com ) is powered by Digital Science, a Macmillan company that focuses on technology to aid scientific research. It collects data from three primary sources: social media (e.g., Twitter, Facebook, Google, Pinterest, and blogs); traditional media, both mainstream (e.g., The Guardian and New York Times ) and science-specific (e.g., New Scientist and Scientific American ); and online reference managers (e.g., Mendeley and CiteULike). It also calculates a score for each article on the basis of a weighting of those sources. This is an algorithm-calculated quantitative measure of the quality and quantity of attention the article has received [ 6 ].
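As a rough illustration of how an attention score of this kind can be assembled, the sketch below sums per-source mention counts under per-source weights. The source names and weight values are invented for the example; Altmetric's actual sources, weights, and algorithm differ:

```python
# Illustrative per-source weights (made up for this example; not Altmetric's real values)
WEIGHTS = {"news": 8.0, "blog": 5.0, "twitter": 1.0, "facebook": 0.25}

def attention_score(mentions: dict[str, int]) -> float:
    """Weighted sum of mention counts; unknown sources contribute nothing."""
    return sum(WEIGHTS.get(source, 0.0) * count for source, count in mentions.items())

print(attention_score({"news": 1, "twitter": 6, "facebook": 4}))  # → 15.0
```

The key property such a scheme captures is that a single mention in a news outlet counts for far more than a single tweet, so two articles with the same total number of mentions can receive very different scores.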
In early 2018, Digital Science & Research Solutions launched Dimensions, a novel online academic platform designed to provide a distinct viewpoint on research outcomes. Grant awards, journal and book publications, mentions of social media, academic citations, clinical trials, and commercial patents are considered research outputs. The publication and citation contents at Dimensions are created and constantly updated by integrating data from multiple sources, including multiple clinical trial records, open-access articles, indexes covering many scientific journals, databases with content licenses, and open-access databases [ 7 ].
Numerous citation analyses and the most cited articles have become available in dentistry, including areas such as caries [ 8 , 9 ], bulk-fill composites [ 10 ], endodontics [ 11 – 17 ], implants [ 18 – 20 ], pediatric dentistry [ 21 , 22 ], periodontology [ 23 , 24 ], oral medicine and radiology [ 25 – 27 ], and orthodontics [ 28 ], and in topics such as dental traumatology [ 12 , 29 ], tooth wear [ 30 ], minimally invasive dentistry [ 31 ], orofacial pain [ 32 ], and dental education [ 33 ]. Some citation analysis studies have included articles published in multiple dentistry journals [ 4 , 34 – 38 ] or in a single dentistry journal [ 13 , 39 ].
Dentin adhesives appear to have made tremendous progress over the years since adhesives were first introduced in 1955 by Buonocore in a study on the bonding of resins to etched enamel surfaces and later after the introduction of resin bonding to adhere to etched dentin by Fusayama et al. [ 40 – 42 ]. Dental adhesive technology is constantly evolving with the rapid changes in commercial adhesives. These developments are the result of numerous laboratory and clinical studies, and the data obtained are highly important in showing the potential success of these materials and in guiding future research [ 43 ].
The basic mechanism of bonding to enamel and dentin involves the replacement of resin monomers with the minerals removed from the dental hard tissues, which cause porosity, and upon setting, micromechanical interlocking occurs in the formed porosities [ 44 ]. Adhesives can be classified as “etch and rinse” or “self-etching” depending on the underlying adhesion strategy, and the degree of substance exchange varies significantly among these adhesives [ 44 ]. Nevertheless, the success of both adhesion strategies has been reported in both laboratory and clinical research. However, it’s important to note that their effectiveness may depend on the specific product being used [ 45 ].
To date, no bibliometric analysis has been carried out to provide a more comprehensive perspective to evaluate research on various topics in the field of dentin adhesives, enabling us to anticipate future advancements and direct research efforts in this area. Thus, the purposes of this study were to gain insight into the scientific interests, research trends, and development within the field of dental adhesives by using WoS, Scopus, Google Scholar, Altmetric, and Dimensions. | Materials and methods
To identify the most cited articles on dentin adhesives, our study was conducted in two stages, in which bibliometric and altmetric analysis data were collected. Institutional ethics committee approval was not necessary because the data used in this study were obtained from publications.
Initially, the WoS database ( http://www.webofknowledge.com ) was used for the bibliometric analysis. On February 12, 2023, a search was conducted in the "Web of Science Core Collection (WoSCC)" using the search terms listed in Table S1 , starting from the year 1945. The most commonly used free and Medical Subject Headings (MeSH) terms in the published literature on dentin adhesives were combined to create keywords. The field tag “Topic” was selected, and the search returned 142,494 articles, ranked in descending order of citation count. The search was then successively restricted to articles written in the English language ( n = 137,996), and ‘Science Citation Index Expanded (SCI-E)’ and ‘Emerging Science Citation Index (ESCI)’ index limitations were applied, resulting in 123,086 articles. The document types “article” and “review article” were selected ( n = 115,845). After screening the articles, all studies were exported into the Excel program as a full record.
After ranking the articles according to their numbers of citations in the WoS database, two independent researchers (F.K. and M.D.) reviewed the titles and abstracts of the articles to identify the candidates for full-text review. Apart from the restrictions set, the eligibility criteria consisted mainly of studies that had data or topics that directly included dentin adhesives. The first 100 articles with the highest number of citations according to the criteria were identified independently by the two researchers (F.K. and M.D.). All results were cross-checked, and inconsistencies were resolved after reading the full texts of the articles and reviewing the relevant literature. The inter-examiner agreement was quantified using the kappa coefficient.
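The kappa coefficient corrects the raw proportion of agreement between the two reviewers for the agreement expected by chance alone. A compact illustration with hypothetical include/exclude screening decisions (1 = include):

```python
def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Cohen's kappa for two raters making binary include/exclude decisions."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal "include" rate
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions on eight candidate articles (1 = include)
print(cohens_kappa([1, 1, 0, 0, 1, 0, 1, 1], [1, 0, 0, 0, 1, 0, 1, 1]))  # → 0.75
```

Raw agreement here is 7/8 = 0.875, but because both raters include roughly half the articles, half of that agreement is expected by chance, leaving kappa at 0.75, conventionally read as substantial agreement.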
After the top 100 most cited articles were identified, the citation counts were manually retrieved for each article from the Scopus ( https://www.scopus.com/ ), Google Scholar ( https://scholar.google.com ), and Dimensions databases ( https://app.dimensions.ai ) on the same date to provide a more comprehensive view, as the citation count of the same article may vary on different dates (date of access: March 3, 2023).
For the altmetric analysis, the Altmetric Attention Score (AAS; a metric that automatically calculates the weighted count of social media attention received by a research output) was used. The 100 most cited articles were accessed by manually scanning the Altmetric Explorer database ( https://www.altmetric.com ) through the “Advanced Search” option using “publication title” or “DOI” simultaneously (date of access: March 3, 2023). A donut graph with different colors representing the amount of attention given to the different types of output was constructed with the AASs. Articles that were found in the database but were not cited in other articles and those that were added to the database either by institutional implementation or through a non-scoring source were displayed in the donut with a question mark. If the article was not mentioned at all in any article or if this output did have a score at one point but had been removed/reduced because of changes in the number of mentions, it was represented with “0” in the altmetric donut. At this point, there would be no difference in that both cases would indicate having no tracked attention or altmetric score assigned to the research output (help.altmetric.com).
The top 100 most cited articles are shown in Table 1 according to their numbers of citations as indicated in the WoSCC database, from highest to lowest, including results from all databases searched. Because of a tie in the number of citations at the final position, our top 100 list consisted of 101 articles. After the final list was confirmed, the top 100 most cited articles were analyzed by the researchers, who recorded the number of citations, publication name (title), year of publication, journal name and impact factor, author(s) (name, number, and authorship position), country, institution, and type and field of study. When the article analysis results were discrepant between the two independent researchers, a consensus decision was reached through a discussion.
For articles with the same number of citations, more recent articles were listed first. The list of journal names was arranged in order of their numbers of top-cited articles, and the Journal Impact Factor (JIF) 2021 from Journal Citation Reports ( https://jcr.clarivate.com ) was used to rate journals with the same numbers of articles (Table S2 ). The institute of origin was based on the address of the first author's affiliation. If the first author worked at more than one institution that belonged to more than one country, each institution and country were counted. The type of study was classified as clinical, basic, review, systematic review, meta-analysis, or lecture based on the article type. To determine the area of study, the full text of each article was carefully examined by identifying concepts based on MeSH terms from PubMed.
The Visualization of Similarities (VOS) Viewer software program (version 1.6.15; Centre for Science and Technology Studies, Leiden University) was used to analyze the co-authorship network and journals. SPSS version 21 (IBM Corporation, USA) was used for the statistical analysis of the frequencies of the descriptive measures. | Results
The top 100 most-cited articles are listed in Table 1 according to the number of citations. The most cited article, published in 2003 by Van Meerbeek et al. in Operative Dentistry, had 1288 citations and was a lecture on adhesion to enamel and dentin (Table 1 ). The least-cited article had 198 citations. The top 100 most cited articles had a total of 34,526 citations, and the mean number of citations per article was 342.
Journals and years of publication
The top 100 cited articles were published in 18 journals, all in the English language. Nine of the 18 journals had each published only one of the 100 most cited articles, while three other journals had each published two articles. The other 6 journals that published at least 3 of the most cited articles are shown in Fig. 1 . The impact factors of the six journals were between 2.16 and 15.304. The journal with the highest number of top-cited articles ( n = 33) was the Journal of Dental Research, followed by Dental Materials ( n = 32) and the Journal of Dentistry ( n = 8). The top 100 most cited articles were published between 1967 and 2018 (Fig. 2 ). Sixty-two of these articles were published between 2000 and 2010. The year 2005 had the highest number of top-cited articles ( n = 12), followed by 2003 ( n = 10), 2002, and 2010 ( n = 7). The oldest article, written by Gwinnett et al., was published in the Archives of Oral Biology in 1967. The newest article was written by Breschi et al. and published in Dental Materials in 2018.
Authors, countries, and institutions of origin
In total, 244 unique authors contributed to the 100 most cited articles. Five articles were attributed to a single author; 13 articles to two authors; and 83 articles to three or more authors. The top 100 list consisted of 65 different first authors. The most cited articles were published by Van Meerbeek B. (9 articles; 4650 citations), followed by those by Pashley D.H. (6 articles; 2769 citations) and Tay F.R. (6 articles; 2184 citations) (Table 2 ). Regarding the total author network, Pashley D.H. was leading with 37 articles and 12,517 citations, followed by Tay F.R. (30 articles; 9607 citations), Van Meerbeek B. (24 articles; 11,088 citations), Carvalho R.M. (19 articles; 6804 citations), De Munck J. (16 articles; 8444 citations), and Lambrechts P. (16 articles; 7811 citations; Fig. 3 ).
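A co-authorship network like the one behind these figures is, at its core, a weighted graph: nodes are authors and an edge's weight counts how many articles a pair co-authored. The counting itself is simple, as this toy sketch with invented article author lists shows:

```python
from itertools import combinations
from collections import Counter

# Toy author lists for three hypothetical articles
articles = [
    ["Pashley DH", "Tay FR", "Carvalho RM"],
    ["Van Meerbeek B", "De Munck J", "Lambrechts P"],
    ["Pashley DH", "Tay FR"],
]

# Edge weight = number of articles the pair co-authored; sorting makes pairs canonical
edges = Counter()
for authors in articles:
    for pair in combinations(sorted(authors), 2):
        edges[pair] += 1

print(edges[("Pashley DH", "Tay FR")])  # → 2
```

Tools such as VOSviewer build essentially this edge list from the exported bibliographic records and then lay the graph out so that frequently co-publishing authors cluster together.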
The first author's address was used to ascertain the country of origin. Accordingly, the top 100 articles originated from 16 countries (Table 3 ), of which Japan had the highest number of articles (25 articles; 7847 citations), followed by Belgium (20 articles; 9572 citations), the United States (18 articles; 5805 citations), Italy (9 articles; 2784 citations), and Brazil (6 articles; 1883 citations).
On the basis of the first authors' addresses, 38 institutions contributed to the top 100 most cited publications, of which 10 had at least 3 publications (Table 4 ). Among the 10 institutions, the most contributions were made by the Catholic University of Leuven (20 articles; 9572 citations), followed by Tokyo Medical and Dental University (10 articles; 3118 citations), the University of Hong Kong, and Prince Philip Dental Hospital (7 articles; 2393 citations).
Type and field of study
With 69 articles, basic science research had the highest number of articles among the top 100 most cited articles. Twenty-five articles were reviews, 3 articles were systematic reviews, 1 article was a meta-analysis, 1 article was a systematic review and meta-analysis, 1 article was a lecture, and 2 articles reported clinical trials (Table 5 ). One of the two clinical trials included both in vivo and in vitro studies. The major topic of interest in the top 69 most cited basic science articles was the ultramorphological structures of dentin and adhesive interfaces (39 articles), followed by bond strength to dentin (34 articles) and hybrid layers (25 articles). The major topic of interest in the top 25 most cited review articles was the hybrid layer (11 articles), followed by the ultramorphological structures of dentin and adhesive interfaces (8 articles) and bonding to dentin (7 articles). Of the two clinical studies, one was related to the clinical performances of total-etch adhesive systems, and the other was on the clinical performance of multimode adhesive systems (Table 5 ).
Altmetric assessment
Among the top 100 most-cited articles, 43 had AASs. Forty-nine articles had no tracked mentions, and nine had interactions that were not included in the calculation of the AAS. The AASs of the 43 articles were as follows: 1–5 in 27 articles, 6–10 in 12, and 10 or higher in 4. The article with the highest AAS (24), a review on dentin adhesive/aging written by Breschi et al., was published in Dental Materials in 2008. It was followed by a meta-analysis on clinical performance written by Heintze et al. and published in the Journal of Adhesive Dentistry in 2012 (AAS = 15).
In our study, the citation count of the top 100 articles was between 1288 and 198 on WoS, between 1464 and 208 on Scopus, between 3118 and 276 on Google Scholar, and between 1100 and 184 on Dimensions. The total number of citations was highest on Google Scholar, followed by Scopus, WoS, and Dimensions. However, the number of citations for the same article differed between the databases. In addition to scientific articles, Google Scholar includes citations from books, theses, and other works, so the results from the database should be interpreted with caution. Currently, Scopus only counts citations from 1996 onwards, which is a major shortcoming for identifying the most cited journal articles, but expansion of the citation count to before 1996 has been planned for the near future [ 29 ]. On the other hand, the database indexes more international and open-access journals than the WoS [ 5 ]. In accordance with another study, the Dimensions database was assessed using a free application that does not provide access to the website's functionalities, including grants, patents, clinical trials data, and analytical tools [ 7 ]. By contrast, citations were collected using the complete versions of WoS and Scopus [ 46 ]. Moreover, while the number of citations on WoS, Scopus, and Dimensions showed no correlation with the AASs, the number of citations on Dimensions strongly correlated with those on WoS and Scopus. Both Dimensions and Altmetric can provide a more comprehensive assessment of research impact [ 46 ]. In parallel with the main logic of our study, the “all databases” section of the ISI Web of Knowledge database was selected as the main database in other studies because it can count citations in scientific articles over a wide period from 1945 to the present [ 12 , 22 , 29 ].
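Because citation counts are heavily skewed, agreement between databases is usually assessed with a rank correlation such as Spearman's rho rather than a raw Pearson correlation. A small self-contained sketch (the citation counts below are illustrative, not this study's full data):

```python
def spearman_rho(x: list[float], y: list[float]) -> float:
    """Spearman rank correlation; assumes no tied values for simplicity."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

wos = [1288, 950, 700, 500, 198]      # illustrative citation counts for five articles
scopus = [1464, 650, 1000, 560, 208]  # the same articles in a second database
print(spearman_rho(wos, scopus))  # → 0.9
```

A rho close to 1 means the databases order the articles almost identically even when the absolute counts differ, which is the pattern reported here for WoS, Scopus, and Dimensions.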
In our study, the number of citations was lower than those in studies conducted in different dentistry areas such as endodontics (between 2115 and 246 citations) [ 17 ] and implant dentistry (between 2229 and 199 citations) [ 19 ], but higher than those in other studies on dentistry areas such as pediatric dentistry (between 182 and 42 citations) [ 22 ], oral medicine and radiology (between 624 and 86 citations) [ 26 ], and orthodontics (between 545 and 89 citations) [ 28 ]. In fact, the citation rates differed for each specialization depending on the number of researchers working in a specific field [ 15 ]. In addition, the wide variety of subdisciplines in specific fields may be another influencing factor in the citation rate.
Of the most cited articles in our study, 89.1% (90 articles) were published before 2010. Our findings are consistent with those of other studies [ 15 , 17 , 19 , 28 , 29 ]. Contrary to our findings, the most cited articles in some studies were published in the past decade [ 11 , 12 , 22 ]. The oldest articles have more time to be cited than recent articles, regardless of their scientific significance, hence the risk of overlooking recent influential articles [ 25 ]. As supported by our findings and previous studies, it can be considered that an article needs a publication period of at least 6 to 15 years to receive sufficient citations and become a citation classic [ 19 ]. This may explain why none of the 100 most cited articles in our study were published in the last 5 years. According to Kuhn's philosophy, the scientific community has a tendency to stick to a paradigm [ 47 ]. In this context, this means that citations have a “snowball effect” because other authors are more inclined to cite articles on the basis of their numbers of earlier citations and not their content or quality [ 48 ]. On the other hand, a publication with more than 400 citations should be considered a classic, but in smaller fields with fewer researchers, 100 citations may suffice [ 15 , 49 ]. The first 13 most cited articles in our study were cited more than 400 times, whereas the 100th article was cited 198 times. Therefore, in our study, the citation rate was influenced not just by the snowball effect but also by the article's content or quality. Moreover, when the AASs were analyzed, the rate of mentioning articles published after 2010 on social media was 23.3% (8 articles), which is higher than the citation rates. One study found a high correlation between the citation count in Dimensions and those in WoS and Scopus but found no correlation between the citation counts in WoS, Scopus, Dimensions, and Altmetric [ 46 ].
Another study reported a weak but positive correlation between the AAS and the number of citations [ 50 ]. Altmetric behaves completely differently from the classic citation system, allowing recently published works to achieve recognition and visibility quickly. Thus, Altmetric can highlight newly published research articles with higher prevalence rather than top-cited articles, which are usually at least 1 or 2 decades old [ 46 ].
In our study, 70% of the first 100 most cited articles (71 articles) were published in journals with an impact factor greater than 5, which is high for the field of dentistry. Except for one, the other 29 most cited articles were published in journals with impact factors higher than 2, of which 15 were in journals with impact factors greater than 3, which indicates a relatively high impact. This result was consistent with those of other studies [ 4 , 24 , 35 ]. It is well known that researchers choose high-impact journals for their article submissions and that journals with high impact factors attract high-quality articles [ 26 ]. However, no correlation was found between the journal impact factor and the number of articles that received the most citations [ 17 , 26 ]. On the contrary, the number of citations and the relevant impact factor have been found to be closely correlated in a limited number of journals, especially in areas with high citation intensity [ 4 , 24 ]. This can be attributed to the fact that articles with high citation rates tend to be published in journals with high impact factors [ 35 ]. In addition, more than a third of the articles have been published in specialty journals, including the subjects of our study, and this result may justify why fewer journals have attracted more attention [ 26 ]. Therefore, this conforms to Bradford's law, which explains why only a few journals in a subject area are most frequently cited and consequently most likely to be of interest to researchers in the discipline [ 22 , 50 , 51 ]. In line with our findings, similar results have been observed in other studies [ 17 , 25 , 26 ].
This study shows that 25 of the 100 most cited articles originated in Japan. The introduction of resin bonding to etched dentin by Fusayama et al. [ 41 ], along with extensive research conducted in the following decade, and later, the definition of the hybrid layer by Nakabayashi [ 52 ], had a significant influence on most of the highly cited articles, all of which had Japanese origins. In our study, 20 of the 100 most cited articles were affiliated with the Catholic University of Leuven in Belgium and were published between 1992 and 2012. This was followed by 10 articles from Tokyo Medical and Dental University, spanning the years 1979 to 1999, and 7 articles from the University of Hong Kong, Prince Philip Dental Hospital, covering the period between 1996 and 2005. These universities are particularly focused on the subspecialty of dental adhesion. Remarkably, although nearly one-fifth of the 100 most cited articles were produced by institutions in Japan, the most cited articles were from Belgium (Catholic University of Leuven), particularly considering that Japanese articles were among the earliest and most pioneering contributions to the field. Despite Belgium's modest population, researchers from this country have been comparatively prolific in operative dentistry-related publications during the study period [ 29 , 53 ], aligning with our finding that researchers affiliated with this center had two or more highly referenced articles (Fig. 3 ). Also, the reasons for the high citation rates of Belgian articles could be attributed to factors such as international collaboration, research infrastructure, and visibility within the global scientific community. In addition, in line with the results of other studies [ 15 , 17 ], approximately one-third of the most cited articles (28 articles) in our study were produced by independent institutions. It's essential to consider the extent of international collaboration in dentin adhesive research.
Articles resulting from collaborative efforts between researchers from various countries might have received more citations due to their diverse perspectives and broad relevance.
In our study, most of the top-cited articles were in the field of basic research (69 articles), followed by reviews (25 articles) and systematic review and/or meta-analysis (5 articles). Only two of the top cited articles reported clinical experiences. Consistent with our findings, other studies have reported that most of the top-cited articles were in the field of basic science [ 15 , 17 , 39 ]. On the other hand, other studies found that most top-cited articles reported clinical experiences [ 4 , 19 , 25 , 28 ]. However, one study found that the most top-cited articles were reviews [ 13 ]. These differences may be due to differences in subspecialties in the field of dentistry. In the early stages of dentin adhesive development, the papers that formed the foundation of the field generally focused on basic research, investigating the principles of adhesion, the composition of adhesives, and their interactions with dentin. Some of the pioneering articles from this period, while groundbreaking, may have been more cited because of their age. Basic research in dentin adhesives, a subspecialty of operative dentistry, is crucial to investigating the efficacy of new materials or modified techniques [ 15 ]. In vitro studies play an important role in enhancing methods and providing early data on which later research with greater evidence can be based [ 11 ]. In our study, most topics in basic science were on the ultramorphological structures of dentin and adhesive interfaces (39 articles), followed by bond strength to dentin (34 articles) and hybrid layers (25 articles). The integration of knowledge from new basic science research into the subspecialty practices of operative dentistry provides the opportunity to address major clinical issues [ 15 ].
However, the fact that our study detected very few systematic reviews, meta-analyses, and randomized controlled trials (RCTs) among the most cited papers suggests that more such studies on dentin adhesives are needed.
As with other citation analyses, this study has some limitations. By including multiple databases, we attempted to minimize the differences in citation counts between databases. Because the study focused on only the top 100 most-cited articles, several relevant articles were necessarily excluded. In addition, articles written in languages other than English, as well as books and conference proceedings, were not included in the study.
Most top-cited articles (89.1%) were published before 2010. In our study, the most frequently cited articles were concentrated in a few journals. As first author, Van Meerbeek B. had the highest number of articles, with nine articles and a total of 4650 citations. By country, the highest number of top-cited articles originated from Japan; by institution, the most top-cited articles originated from the Catholic University of Leuven in Belgium. Basic science research had the highest number of articles, followed by reviews. The primary foci of basic research were the ultramorphological structures of dentin and adhesive interfaces. The major topic of the reviews was hybrid layers. Only 2 RCTs and a few systematic reviews and meta-analyses were published. Thus, in the future, studies with high levels of evidence, such as systematic reviews, meta-analyses, and RCTs, are required.
This study aimed to identify the 100 top-cited articles on dentin adhesives utilizing comprehensive bibliometric and altmetric analyses.
Materials and methods
The Institute of Scientific Information Web of Knowledge database was used to compile the top-cited articles published from 1945 through February 12, 2023. Citation counts were manually retrieved for each article from Scopus, Google Scholar, Dimensions, and Altmetric. The articles were analyzed in terms of their number of citations, year, journal name, author (name, institution, and country), and type and specific field of study. We used descriptive statistics to summarize the results.
Results
The analysis revealed that the top 100 cited articles originated from 18 English-language journals and collectively accumulated a remarkable 34,526 citations. The article with the highest number of citations received 1288 citations. Among authors, Van Meerbeek B. stood out with nine articles and 4650 citations, followed by Pashley D.H. with six articles and 2769 citations. Japan was the leading contributor by country, while the Catholic University of Leuven led in terms of institutions with 20 articles.
Conclusion
According to this study, basic research articles garnered the most citations, followed by review articles. The citation analysis revealed that researchers have focused on basic topics such as the ultramorphology of dentin and adhesive interfaces, followed by bond strength to dentin. The presence of only two studies on clinical experiences suggests that studies with high-level evidence, such as systematic reviews, meta-analyses, or randomized controlled clinical trials, are required.
Clinical relevance
More studies with high-level, evidence-based research are needed in the field of dental adhesives.
Supplementary Information
The online version contains supplementary material available at 10.1007/s00784-024-05498-5.
Below is the link to the electronic supplementary material. | Acknowledgements
We thank Altmetric LLP (London, UK) for their support and permission to access the Altmetric data used in this study.
Author contributions
Conceptualization: [Mustafa Demirci]; Methodology: [Ferda Karabay]; Formal analysis and investigation: [Ferda Karabay, Mustafa Demirci]; Writing—original draft preparation: [Mustafa Demirci, Ferda Karabay]; Writing—review and editing: [Safa Tuncer, Neslihan Tekçe, Meriç Berkman].
Funding
Open access funding provided by the Scientific and Technological Research Council of Türkiye (TÜBİTAK). The author(s) received no financial support for the research, authorship, and/or publication of this article.
Declarations
Competing interests
The authors declare no competing interests.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Informed consent
For this type of study, formal consent is not required. | CC BY | Clin Oral Investig. 2024 Jan 13; 28(1):92
PMC10787683 | 38217625 | Introduction
Studies evaluating the pain prevalence among laparoscopic surgeons are rife in the literature; there are also multiple reviews on the matter [ 1 – 3 ]. However, more nuance is required to draw meaningful conclusions from this data. A previous meta-analysis found that 77.8% of surgeons practicing Traditional Laparoscopic Surgery (TLS) and 53.8% of those practicing Robot-Assisted Laparoscopic Surgery (RALS) experienced pain while operating. In the reviewed studies, the examination of contributing factors primarily focused on demographic or surgical factors, such as biological sex, age, experience, caseload, or operating room ergonomics, rather than tool usability [ 1 ]. Numerous studies have found increased risk of discomfort for female and small-handed surgeons, with one study reporting a sevenfold increase in the risk of pain or injury [ 4 ]. While this is concerning, further details are required to complete this picture. Most pain prevalence surveys only ask binary response questions, asking whether the surgeon experiences pain in a given region. Comparatively few studies investigate symptom frequency [ 5 – 9 ] or severity [ 10 – 12 ]. Conducting a similar study using Likert scale data and subgroup analysis will highlight how surgical ergonomics are impacted by biological sex and glove size. A Likert scale enables respondents to give a more nuanced, graded response to questions, often using a 1–5 scale from low to high impact. Subgroup analysis breaks down the study sample into subsets of participants in order to better understand if impacts differ across groups with similar traits.
There are limited survey data in the literature making connections between physical comfort, demographic factors, and tool usability. Wong et al. [ 13 ] showed that female surgeons were 5−25 times more likely to report the LigaSure device, a pistol-grip operated laparoscopic tool for sealing blood vessels, as being painful or too large to use compared to their male colleagues. The statistical significance of this trend disappeared after adjustment for glove size and other factors, demonstrating that this pattern was in large part due to the smaller female hand size [ 13 ]. Sutton et al. found that small-handed females experienced significantly more shoulder and neck discomfort than small-handed males [ 14 ]. Previous studies have shown that female surgeons consider TLS tools too large and awkward to use more frequently than their male colleagues [ 13 , 15 ], as do those with a glove size of 6.5 or below [ 16 , 17 ]. This often prompts them to adopt a modified one-handed or two-handed grip style [ 13 , 15 ]. The exact adaptations used for the modified one-handed grip were not captured in the studies and may have been different per impacted user. However, a clear correlation was reported between gender and an inability to use laparoscopic tools in the manner as designed [ 15 ]. Minimal information is available regarding the fit and ease of use of RALS hand controls, although Chiu et al. [ 18 ] showed that female trainees performed better on simulated suturing tasks using the da Vinci console than their male counterparts.
The excessive force required to operate TLS tools is well documented. Kasai et al. [ 19 ] demonstrated that 250 N was required to properly fire an anastomotic stapler. This level of force is sometimes unattainable for female surgeons due to their strength and grip diameter. Poor force transferral means that only 20% of the surgeon’s grip strength translates to the tool tip. The opposite phenomenon is experienced with RALS consoles. The lightweight controls require less applied force to grip and manipulate tissue. Johnson et al. [ 20 ] demonstrated that a da Vinci robotic grasper is closed when the hand controls are separated by 4.5°. Full hand control closure may only require approximately 5 Pound per Square Inch (PSI) from the surgeon while the grasper may be applying approximately 500 PSI to the tissue. Mucksavage et al. [ 21 ] showed that minor fluctuations in grip forces of less than 1 N may be attributed to the surgeon’s wrist position and prior wear on the robotic tools being used. Increased force variations may be caused by the console model and instrument type. Grip forces ranged from 2.26 to 39.92 N between tools and consoles. The physical separation and utter lack of haptic feedback mean that inexperienced surgeons may not perceive this.
Size, shape, required grip or positioning, and operating force will all have a bearing on the range of surgeons who can comfortably use particular instruments. This survey study aimed to examine intraoperative pain and injury alongside tool usability. The degree to which surgeon discomfort impacted the operation was examined as well as how tool design affected posture and performance during a procedure. Correlations were then explored based on the biological sex and glove size of participants. Evaluating how the experience of using commonly available tools and robotic handle controls differ with anthropometry will provide valuable insights into future surgical instrument design. | Methods
The survey contained questions on the presence and impact of intraoperative pain or stress, as well as the intuitiveness and comfort of surgical tools for TLS and RALS. Rather than the binary answers elicited in previous studies, responses here were based on how much the level of discomfort interfered with the procedure being performed. To consider tool usability, information was also collected about the pain experienced in 13 regions of the hand. Two questions from the Subjective Workload Assessment Technique (SWAT) [ 22 ] were incorporated to evaluate mental effort and stress. Additional questions regarding the ease of use of laparoscopic tools and robotic hand controls were also included. Table 1 contains a summary of the questions and answer formats.
A link to the Qualtrics survey was sent to all 8110 email addresses on the European Association for Endoscopic Surgery (EAES) mailing list. The survey was available for completion over a period of 6 weeks throughout July and August 2022. Reminders were sent every 1−2 weeks throughout this time. This study was approved by the Swinburne University Human Research Ethics Committee. An information statement was included in the survey front matter stating the investigator names and affiliations, funding source, question scope, and data handling. A declaration was also included that by commencing the survey, respondents were consenting to participation.
Participants were required to have experience in either TLS or RALS to have their responses considered in the survey. Valid responses were those answering at least one of the questions on intraoperative pain or tool usability. Completion of the demographic questions only was not acceptable for inclusion. Some responses required manual review, as entries were recorded after one week without additional participant input. The number of respondents to each section of the survey was considered during statistical analysis.
Microsoft Excel (Microsoft Corporation, Redmond, WA, USA) was used to obtain basic prevalence estimates for each question. Correlations within the data were determined in the RStudio statistical computing program (RStudio, Inc., Boston, MA, USA). Trends in the results based on biological sex or glove size were of particular interest. Fisher’s Exact Test [ 23 ] was used to find patterns in categorical data and determine whether a statistically significant association exists. It allows the testing of significance of categories against groups of participants. A p-value below 0.05 was considered, and referred to, as statistically significant. | Results
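The categorical comparisons described above can be sketched in code. The following is a minimal, self-contained illustration of a two-sided Fisher's Exact Test on a 2×2 table (the study itself used RStudio; this sketch uses Python, and the example counts are invented for illustration, not the actual survey data):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's Exact Test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table.
    """
    r1, r2 = a + b, c + d          # row sums
    c1, n = a + c, a + b + c + d   # first column sum, grand total

    def p_table(k):
        # Probability of observing k in the top-left cell, margins fixed
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)

    p_obs = p_table(a)
    k_min, k_max = max(0, c1 - r2), min(c1, r1)
    # Small tolerance guards against floating-point ties
    return sum(p_table(k) for k in range(k_min, k_max + 1)
               if p_table(k) <= p_obs * (1 + 1e-12))

# Hypothetical counts (rows: glove size <= 6.5 vs larger; columns: palm pain yes/no)
p = fisher_exact_two_sided(12, 8, 10, 40)
significant = p < 0.05  # the study's significance threshold
```

In practice this is equivalent to `fisher.test()` in R or `scipy.stats.fisher_exact` in Python; the hand-rolled version is shown only to make the computation explicit.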
There were 323 valid responses from TLS surgeons collected over the six-week period; 102 respondents also had RALS experience. The most frequently used console for RALS was the da Vinci Xi (77.2%). Most commonly, participants were 47 ± 10.6-year-old males (83%) from Europe (84%), specifically Italy (22% of all respondents), who were 176 ± 8.3 cm tall, of medium (7 − 7.5) glove size (55.8%), and right-handed (87.6%). Female participants were, on average, 7.5 years younger and 12.1 cm shorter than their male colleagues, with significantly smaller glove size. Regarding experience, female participants had significantly fewer years’ experience in TLS ( p < 0.0005), although not RALS ( p > 0.05); there was no difference in weekly operating time based on biological sex for either modality. Respondent demographics are included in Tables 2 and 3 . The numbers of responses for a given section did fluctuate throughout the questionnaire due to experience (surgeons without RALS experience did not answer these questions), survey dropout, and participants not providing a relevant answer to open-ended or optional questions. The number of respondents for each section is provided in Table 1 alongside the question summary. The total responses for each question were taken into consideration during statistical analysis.
The shoulder and neck were the sites of the most complaints reported by TLS (70%) and RALS surgeons (39−52%); these were also the locations of the largest proportions of moderate and severe pain. TLS surgeons experienced a significantly increased severity and impact of pain compared to RALS surgeons for the neck, shoulders, upper and lower back, thenar area, proximal phalanx of the thumb, knees, and ankles and feet. Figure 1 shows the severity of pain reported by TLS and RALS surgeons.
Operating laparoscopically required moderate mental effort for 56−57% of surgeons regardless of modality. However, a significantly larger proportion of surgeons reported that the robotic console caused moderate to high stress and confusion compared to TLS (49.0 vs 34.9%, p < 0.03).
Overall, surgeons agreed on the usability of RALS equipment with significantly more consistency than TLS. This information is depicted in Fig. 2 . Most RALS surgeons agreed or strongly agreed on the perceived naturalness and comfort of their operating positions (68%); TLS surgeons had mixed reports, with 35% disagreeing on their intraoperative comfort, and 45% agreeing. The difference in the dispersion of answers between TLS and RALS surgeons was statistically significant ( p < 0.05). Surgeons experienced significantly more comfort viewing the RALS console display than a TLS monitor. Additionally, surgeons were significantly more likely to hold unnecessary tension while performing TLS compared to RALS. RALS controls were associated with intuitiveness and comfort at a significantly higher frequency than TLS tools. Surgeons were more likely to confirm that TLS required regular wrist hyper-extension and ulnar deviation compared to RALS ( p < 0.05). RALS surgeons found it more difficult than TLS surgeons to determine the level of force they were applying to tissue ( p < 0.05), although 34% still reported that this was easy to do.
Similar trends were found for surgeons that were female and those that were small-handed, as most female surgeons had a glove size of 6 or 6.5. Female and small-handed TLS surgeons reported increased headaches, as well as pain in the wrists and thenar area compared to their male colleagues. Those with a glove size of 6 or 6.5 experienced significantly more palm pain than larger-handed surgeons ( p < 0.05); this difference was not significant when stratified by biological sex. Female and small-handed surgeons were less likely to report that TLS tools fit their hands well compared to their colleagues ( p < 0.05). Female surgeons were twice as likely to find it uncomfortable looking at a TLS monitor throughout an operation (38.9 vs 19.6, p < 0.05). Male surgeons were significantly more likely to report mild pain in the ring and little fingers during RALS than female surgeons.
Two hundred surgeons reported requiring interventions to investigate or alleviate the pain. Of these 200 participants, 71% used pain medication, 39.5% were engaging in physiotherapy, 10−12.5% were taking leave, visiting a doctor, or receiving medical scans, and 4% required surgery. Intraoperative pain made 8.9% of surgeons consider ending their surgical career. Female surgeons reported utilizing these interventions and considering retirement at slightly higher frequencies than their male colleagues.
Compared to a previously conducted meta-analysis [ 1 ], the prevalence rate of neck and shoulder symptoms of any severity was approximately 20% higher for both TLS and RALS. Several previous survey studies provide data on the perceived severity of intraoperative pain during TLS on a scale given in either numerical or descriptive (i.e., from mild to severe) terms for different anatomical sites [ 10 – 12 ]. Tjiam et al. [ 10 ] found that the majority of surgeons experiencing symptoms considered them to be mild, which is consistent with the present study, whereas Wauben et al. [ 11 ] and Wells et al. [ 12 ] found that a notable proportion was experiencing moderate symptoms in various anatomic regions. A shortcoming of the rating scales used in these previous studies is that they are highly subjective and lack context. This was addressed in the current study by referring to the pain in terms of how it impacted current and future tasks. The only data found in the literature on the severity of pain during RALS were an overall score presented by Wells et al. [ 12 ]. Therefore, the results presented here stratified by anatomic region provide important insight. The wrists, ring fingers, and small fingers were sites of significant differences in the pain reported by female and male RALS surgeons, suggesting that all RALS surgeons may benefit from refining the design of existing robotic controls.
Previous studies have surveyed surgeons regarding the perceived usability of particular TLS tools to investigate equipment design. These studies showed that female and small-handed surgeons find most tools more difficult to use than their male colleagues, and often must adapt their grip accordingly [ 13 , 15 ]. Results were expressed in terms of awkwardness and fit, which provide limited insight on how instrument design impacts operative success. The present study posed questions on more practical concepts including positioning, intuitiveness, force, tool fit, and fatigue. These questions were intended to have clear implications for the ease of operating using TLS and RALS. Additionally, the prompts regarding posture, force, and fatigue were all indicators for ergonomic risk factors. Points of concern for TLS centered around surgeons adopting unhealthy back and wrist positions throughout procedures. There is limited data on perceived tool usability based on biological sex for RALS in the literature. No significant differences were found between the results provided by male and female surgeons regarding console usability in the results presented here. This perceived ease of performing RALS regardless of biological sex supports the notion that robotics may improve the gender equity in the operating room.
This is a pivotal time to be investigating tool usability in relation to robotic surgery, as several new systems are currently entering the market with different approaches to controller design. Form, intuitive movement, and placement of various function controls all contribute to a system’s ease of use. The Senhance system from Asensus Surgical has scissor-like handles and a mirrored working direction to emulate the experience of performing TLS. In contrast, the manipulators of CMR’s Versius and Medtronic’s Hugo are closer in design to different game controllers and utilize in-line movements [ 24 ]. Anthropometry and user feedback regarding the functionality, form, and size of prototypes were considered in the design of the Versius controllers [ 25 ]. These three systems all have open or semi-open consoles [ 24 ]. Hand control and console design will change the way surgeons are positioned while operating and likely have a notable impact on comfort.
Survey length was a limitation of this study. It was estimated to take 10−20 min to complete. While efforts were made to streamline the process using survey logic, the length may have been off-putting or prohibitive for some surgeons. This may have contributed to the decline in respondents observed throughout the questionnaire. Additionally, matrix tables may lead to missing data, despite their efficiency from a survey design point of view [ 26 , 27 ]. Additional limitations included selection and recall bias, which are present in many survey studies.
While the focus of this investigation was the prevalence of physical discomfort and pain related to the use of TLS and RALS tools, studies have raised the effects of mental load on the well-being of surgeons [ 28 ]. Non-technical surgical skills including situational awareness, decision-making, stress resilience, communication and leadership requirements add to the mental workload impacting surgeon performance. Future work could investigate both the physical and mental pressures on surgeons to expand the understanding of their workplace challenges.
In conclusion, the results presented here build on the existing picture emerging in the literature that RALS is easier and more comfortable for surgeons to perform than TLS. Up to one-third of TLS surgeons and 21% of RALS surgeons considered their symptoms to be moderate or severe, negatively impacting their operating performance. Methods to support or change wrist and back posture during TLS would greatly benefit surgeons. Opportunities also exist to improve upon the viewing angle and hand controls of RALS consoles to further benefit surgeon comfort. As new robotic systems enter the market, it is expected that their novel design features will positively impact surgical ergonomics. Continuously aiming to understand and improve upon laparoscopic tool design is an important pursuit to support surgeon well-being. | It is known that over half of previously surveyed surgeons performing Robot-Assisted Laparoscopic Surgery (RALS) and three-quarters of those performing Traditional Laparoscopic Surgery (TLS) experience intraoperative pain. This survey study aimed to expand upon the ongoing impact of that pain as well as perceived tool usability associated with TLS and RALS, for which considerably less documentation exists. A survey regarding the presence and impact, either immediate or ongoing, of intraoperative pain and Likert scale questions regarding tool usability was administered to TLS and RALS surgeons on the European Association for Endoscopic Surgery (EAES) mailing list. Prevalence statistics as well as trends based on biological sex and glove size were obtained from the 323 responses. Most respondents were right-handed European males (83−88%) with a medium glove size (55.8%). Moderate or severe shoulder symptoms were experienced by one-third of TLS surgeons. Twenty-one percent of RALS surgeons experienced neck symptoms that impacted their concentration. 
Small-handed surgeons experienced wrist symptoms significantly more frequently than large-handed surgeons, regardless of modality. RALS was associated with a significantly more optimal back and wrist posture compared to TLS. TLS surgeons reported increased ease with applying and moderating force while operating. These results suggest that intraoperative pain may be severe enough in many cases to interfere with surgeon concentration, negatively impacting patient care. Continuing to understand the relationship between tool usability and comfort is crucial in guaranteeing the health and well-being of both surgeons and patients.
Supplementary Information
The online version contains supplementary material available at 10.1007/s11701-023-01785-7.
Below is the link to the electronic supplementary material. | Acknowledgements
The authors would like to acknowledge the generous support of Prof George Hanna, President of the European Association of Endoscopic Surgeons (EAES), for coordinating survey access to the members of the EAES.
Author contributions
All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by JH and OT. The first draft of the manuscript was written by JH and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Funding
Open Access funding enabled and organized by CAUL and its Member Institutions. CMR Surgical.
Data availability
All data generated or analyzed during this study are available from the corresponding author on reasonable request or on figshare at 10.6084/m9.figshare.24975273.
Declarations
Conflict of interest
This survey study was commissioned and funded by CMR Surgical Ltd. Authors J. Hislop, O. Tirosh, M. Isaksson, C. Hensman, and J. McCormick have no additional conflicts of interest or financial ties to disclose. | CC BY | J Robot Surg. 2024 Jan 13; 18(1):15
PMC10787684 | 0 | Introduction
Konrad Lorenz was fascinated by the behavioral repertoire and cognitive capacities of corvids. In his first scientific papers (Lorenz 1927 , 1931 ), he describes the richness and flexible use of social behaviors in tame, free-flying Jackdaws Corvus monedula , Magpies Pica pica and Common Ravens Corvus corax . One of his tame ravens also features prominently in his first popular book ‘King Solomon’s ring’ (translated from Lorenz 1949 ): Lorenz describes how this bird uses his name ‘Roah’ like a contact call for Lorenz, presumably to lure Lorenz away from danger. Throughout his papers and books, Lorenz kept on referring to ravens as an example of cognitive sophistication in birds.
Almost half a century later, Bernd Heinrich ( 1995 ) searched the scientific literature for what is known about raven cognition. In support of Lorenz’ view, he found > 1000 entries describing ravens as ‘smart’ birds. Yet, all but one of these reports were of anecdotal character, leaving much room for speculation about the underlying cognitive skills. In the last three decades, the picture has changed drastically: with elegant experimental designs researchers have unraveled sophisticated cognitive skills like inferential reasoning, perspective taking and future planning in ravens (Schloegl et al. 2009 ; Bugnyar et al. 2016 ; Kabadayi and Osvath 2017 ) and other corvids (e.g., Western Scrub Jays: Emery et al. 2007 ; Raby et al. 2007; New Caledonian Crows: Taylor et al. 2012 ; Boeckle et al. 2020 ). Hence, we begin to understand the cognitive building blocks that constitute corvid ‘intelligence’. What is still unclear, though, is why corvids have evolved such capacities. The aim of this paper is to take a first step towards answering the question of why ravens are smart by relating current hypotheses on brain evolution to recent empirical data on challenges faced in ravens’ daily life.
Out of several hypotheses concerning brain evolution, those related to foraging and those related to complexity of social life are particularly prominent. While the former emphasizes food distribution and/or accessibility as key factors (patchily distributed food, Milton 1981 ; extractive foraging, Parker and Gibson 1977 ), the latter considers dealing with conspecifics and maneuvering in a social network as key factors for driving cognitive evolution (social intelligence, Jolly 1966 ; Humphrey 1976 ). Note that in respect of social cognition, the focus can be on different aspects of social life like competition (Machiavellian Intelligence: Whiten and Byrne 1988 ), cooperation (Vygotskian Intelligence: Moll and Tomasello 2007 ), or information transmission (Cultural Intelligence: van Schaik and Burkart 2011 ). However, the common determinant of the variants of social intelligence is that predicting others’ behavior and intentions becomes increasingly difficult with variable social constellations (Whiten 2018 ). This leads to the assumption that the more complex social life becomes, the more individuals should invest in cognitive abilities that allow them to keep track of, and cope with, others. The problem with this intuitive assumption is that social structures in the animal kingdom are highly diverse and reflect different types of complexity (e.g., Freeberg et al. 2012 ; Rubenstein and Abbot 2017 ; Kappeler et al. 2019 ), which likely goes together with varying degrees of cognition. For instance, the caste-based hierarchies in eusocial species may impose different challenges and cognitive solutions than the hierarchies found in groups structured by social relationships. Indeed, it has been proposed that the essential conditions for social intelligence to emerge are those structured groups with individual-based recognition and the formation and maintenance of different types of relationships (e.g., Bergman and Beehner 2015; Kappeler 2019 ). 
In such groups, social complexity may entail (1) how many individuals interact on a regular basis (group size), (2) with whom individuals interact preferentially (social bonds), and (3) how often individuals meet and/or split up into temporary sub-groups (fission–fusion dynamics). While these measures of complexity typically refer to in-group members, the importance of out-group members or neighbors should not be underestimated and recently has received increased attention (Ashton et al. 2020 ).
Following this logic, I argue that to understand why ravens are smart, we need to understand their social life. However, at first glance, the social life of ravens is anything but complex: we can distinguish between two social classes, breeders and non-breeders. While breeders constitute male–female pairs that stay together for years and defend a territory for raising their offspring, non-breeders are mainly immature birds that are not restricted to a given location and tend to form loose groups at food and night roosts (Ratcliffe 1997 ). A similar picture can be found in many other corvids (Glutz von Boltzheim 1993 ). Hence, it has been argued that the social cognition of corvids is driven by challenges associated with long-term monogamous partnership rather than with conspecifics per se (relationship intelligence, Emery et al. 2007 ). Such an argument can be put forward not only for corvids but for monogamous species in general (Scheiber et al. 2008 ) and is supported by measures of relative brain sizes (Dunbar and Shultz 2007 ); it has received limited empirical testing on the behavioral side, though. While I acknowledge the idea of pair partners being key in understanding corvid cognition, possibly in contrast to neighbors/out-group members (compare Ashton et al. 2020 ), I further argue that the non-breeder state represents an additional source of social complexity. Indeed, early reports indicate some form of social structure in raven foraging groups (Coombes 1948 ; Huber 1991 ) and more recent studies described sub-groups composed of individuals with different foraging strategies (Dall and Wright 2009 ). Furthermore, group formation during foraging is not only a passive process, with individuals aggregating at resources of interest, but actively initiated via calls (Heinrich 1988 ). 
Note that ravens feed on highly unpredictable food sources like carcasses or kills and often face difficulties in accessing them due to competition with conspecifics and/or food defense by predators (Heinrich and Marzluff 1991 , 1995 ). Teaming up with others could be a solution to either of the problems.
In ravens, key aspects of social cognition hypotheses (competition, cooperation, information transmission) are thus intertwined with key aspects of foraging-related hypotheses (ephemeral occurrence, restricted access). Specifically in non-breeders, foraging is a social endeavor: as a team, they may become a challenge for breeding pairs (Marzluff and Heinrich 1991 ) and potential predators (Vucetich et al. 2004 ). However, raven foraging groups are anything but stable, with individuals coming and going (Heinrich 1989 ). While an ‘open-group’ character has long been taken as an argument against advanced social cognition, recent theories consider high degrees of fission–fusion dynamics as cognitively challenging (Aureli et al. 2008 ), with the premise that group members form and maintain social relationships. Hence, for applying ideas of social intelligence to ravens, we need to examine (1) whether individuals meet repeatedly (at same or different locations), (2) whether these groups are indeed structured by different relationships, and (3) whether birds build up any form of social knowledge. I and my research group have been working on these questions over the past decades, using a mix of behavioral and bioacoustical methods. Our prime focus has been on observational studies on wild ravens in the Northern Austrian Alps. These studies are complemented with behavioral and playback experiments under field and captive conditions. Our studies are based on the following assumptions: if ravens meet repeatedly at foraging sites, they may learn about others’ attributes, which fosters individual recognition and the formation of dyadic relationships. This way, raven groups get a structured character, despite individuals having a high degree of freedom in joining/leaving (sub-)groups at a particular site. 
Once a structure based on social relationships is formed, several features of social intelligence may emerge—as described for mammals like primates (e.g., Cheney and Seyfarth 1990 , 2007 ), social carnivores (e.g., Holekamp et al. 2007 ) or cetaceans (e.g., Connor 2007 ; Whitehead 2008 ).
Foraging patterns and group formation
To understand how often ravens meet under field conditions, we apply two complementary approaches. First, we use a sighting/re-sighting method of individually marked ravens at a given location: the area of the Cumberland Wildpark, Grünau im Almtal, where ravens regularly snatch food from zoo animals (Drack and Kotrschal 1995 ). Since the mid Nineties, ravens have been habituated to the presence of human observers at the main feeding spots, i.e., the enclosures of wild boars Sus scrofa , bears Ursos arctos and wolves Canis lupus . Since 2008, we have been monitoring their presence at these sites on an almost daily basis following a standardized protocol.
Summarizing the findings from the presence monitoring (Braun et al. 2012 ), we can say the following: first, the size of raven foraging groups in the park is variable between days and across seasons (Fig. 1 a). Abrupt changes in numbers for a few days (e.g., from 60 to 20 ravens to 60 ravens) point towards an opportunistic use of alternative food sources, such as carcasses or kills, when available (e.g., during hunting season in fall). Seasonal patterns (e.g., 10 + birds in summer, up to 100 birds in winter) may reflect changes in food distribution and/or accessibility across the year, e.g., because of the closing/opening of touristic areas or the pressure of territorial breeders in spring. Second, the composition of foraging groups in the park is relatively constant between days within a week, but changes across weeks with some individuals leaving and others joining (Fig. 1 b). This pattern fits well to the notion of ‘open’ groups with moderate to high dynamics described from other studies in Europe and the USA (e.g., Heinrich 1989 ; Huber 1991 ; Boarman et al. 2006 ). As in other studies, we also find a fairly even sex ratio in the groups and an age distribution skewed towards younger birds. However, we consistently see all age-classes represented (Braun et al. 2012 ; Boucherie et al. 2022 ). Hence, foraging groups are made up not only by immature birds (juveniles in their first year: 10–20%; subadults in their second and third year: 40–60%) but also adults (3 + years; 10–30%). On an individual level, we have collected presence data from about 650 birds. Around two-third of them have been tagged as young, so that we have a fair estimation of their age due to the coloration of the inner beak (Heinrich and Marzluff 1992 ). Birds in their first years have a high likelihood to disappear, indicating that they suffer a high mortality risk. On average, we observe 35% of the yearly offspring till adulthood. 
Note that young adults typically remain in the foraging groups until they are 5–8 years old. Some adults even stay non-breeders their entire life (> 10 years); others come back when they have lost their partner and/or territory. These long periods spent as non-breeders differ from those reported in other studies (review in Glutz von Blotzheim 1993; Webb et al. 2012). Possibly, our findings reflect the situation of a saturated population, where most territories are occupied and adults queue for suitable breeding opportunities, as has been described for other territorial breeders (Ens et al. 1992; Penteriani and Delgado 2011). Finally, it is worth mentioning that not all ravens exploit our site on a regular basis: about one-third of them show up only from time to time and do not stay very long; another third pass by more regularly, every now and then, and stay for several weeks; the final third show a preference for our site and can be observed (almost) every day over years (some individuals for > 10 years). According to the literature (e.g., Ratcliffe 1997; Heinrich et al. 1994; Webb et al. 2012), we would expect birds from the first two-thirds to be vagrant non-breeders and birds from the last third to be local breeders. However, we find a proportion of 10–15% (confirmed) breeders in all three units; hence, the majority of vagrant and local ravens in our foraging groups are non-breeders.
To better understand where ravens go when they leave our study site, we implemented a second monitoring method that allows us to track tagged individuals over long distances (Fig. 1c). Since 2013, we have equipped a subset of about 150 birds with solar-powered GPS loggers. Results show that ravens tagged at our site are recorded in the nearby region (Salzkammergut) but also within a range of several hundred kilometers, from Germany and the Czech Republic to Italy and Slovenia (Loretto et al. 2016, 2017). Within that range, they tend to gather at specific sites for foraging. Note that the food at those sites is predominantly of anthropogenic origin, like the feeding of animals at wild/game parks and farms or the ‘leftovers’ at garbage dumps, compost sites, skiing huts etc. (Jain et al. 2022). Our study site at the Cumberland Wildpark thus reflects a typical foraging location for ravens in Middle Europe. Unlike kills and carcasses, anthropogenic food sources are regularly ‘re-filled’ and thus highly predictable in space and time. Still, there are times of food ‘delivery’ (e.g., animal feedings, dumping of garbage) and/or better times of accessibility (e.g., when workers/tourists are leaving), which might explain why we typically observe ravens foraging there in groups rather than by themselves. As with naturalistic food sources, group formation at anthropogenic sources is often accompanied by food-associated calls (Bugnyar et al. 2001), which confirms that ravens actively seek the company of others (Heinrich 1988).
Although most anthropogenic food sources are ‘re-filled’ on a daily basis, we see a huge variation between individuals in how often and how long ravens use them (Fig. 1 b, c). Part of the variation can be explained by ecological factors like differences in food availability across seasons (Jain et al. 2022 ). However, a large part seems to be due to individual preferences: some ravens consistently exploit one or few sources, staying at given sites over years; others exploit a variety of sources, frequently changing between sites and thereby covering a large home range area (Loretto et al. 2016 , 2017 ). Thus, the foraging groups of ravens are composed of individuals with different degrees of fission–fusion dynamics: birds with a low degree are ‘local’ to an area (compare the results from the presence data collected at the Cumberland Wildpark); birds with moderate to high degrees of fission–fusion dynamics tend to visit some or numerous sites, where they are exposed to other local ravens but also to other vagrants with moderate and high dynamics. From the perspective of locals, these vagrant ravens may be regular or irregular visitors who show up from time to time. From the perspective of vagrants, there are locals that they meet at a given location and fellow vagrants that they meet at various locations. In either case, one of the key assumptions of social intelligence is met: ravens meet repeatedly and experience a significant degree of variability and complexity.
Group structure: dominance and bonds
To investigate whether raven foraging groups are structured by social relationships, we observe the social interactions of individuals during and outside feeding. Depending on the study, we apply focal sampling (5 min per bird) or behavioral sampling on an ad libitum basis per time unit (30 min). With either protocol, we can determine dominance relationships from agonistic interactions (like threats, forced retreats, fights and chases) and social bonds from affiliative interactions (like allo-preening, touching/holding body parts, and contact sitting). We face the constraint, though, that not all wild birds can be individually identified, as only a proportion have been caught and marked. Hence, under field conditions, our sample is biased towards interactions between marked birds. We thus complement our studies with observations from captivity, where we can identify all individuals and calculate social networks among group members.
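One widely used way to derive a dominance hierarchy from such sequences of agonistic interactions is Elo-rating, where each winner gains and each loser loses points in proportion to how unexpected the outcome was given the current ratings. The sketch below illustrates the principle with invented bird IDs; it is not our actual analysis pipeline:

```python
def elo_ranks(interactions, k=16, start=1000):
    """Sequentially update Elo ratings from (winner, loser) records;
    a higher final rating means a higher inferred dominance rank."""
    ratings = {}
    for winner, loser in interactions:
        rw = ratings.setdefault(winner, start)
        rl = ratings.setdefault(loser, start)
        # expected probability that the winner wins, given current ratings
        p_win = 1 / (1 + 10 ** ((rl - rw) / 400))
        ratings[winner] = rw + k * (1 - p_win)
        ratings[loser] = rl - k * (1 - p_win)
    return sorted(ratings, key=ratings.get, reverse=True)

bouts = [("M1", "F1"), ("M1", "M2"), ("M2", "F1"), ("M1", "F1")]
print(elo_ranks(bouts))  # ['M1', 'M2', 'F1']
```

Because ratings are updated sequentially, this approach copes naturally with individuals joining or leaving the group, which matches the fission–fusion dynamics described above.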
In captivity, ravens tend to have conflicts with several group members, whereas they engage in affiliative behaviors with only a subset of individuals (typically 1–3, sometimes up to 7; note that our captive groups consist largely of immatures and are limited to 8–15 birds). Hence, their agonistic networks are larger and denser than the affiliative networks (Kulahci et al. 2016), which fits the pattern of many avian and mammalian species forming structured groups (e.g., Croft et al. 2008). If we calculate the mean (± SD) number of interaction partners between marked birds under field conditions per year, we see a similar picture: wild ravens engage in agonistic interactions with 8–12 birds (females: 8.4 ± 2.2, range 0–49; males: 11.6 ± 3.5, range 0–70) and affiliative interactions with 1–2 birds (females: 1.4 ± 1.3, range 0–9; males: 1.6 ± 1.2, range 0–8). The small number of affiliation partners supports the notion that ravens focus on a few individuals at a time, even when the number of potential partners is not restricted. Affiliative interactions can be exchanged between birds of the same and different sex as well as within and across age-classes (Braun et al. 2012; compare Boucherie et al. 2020 for captivity). However, affiliative relationships are typically composed of male–female dyads, whereby the identity of the affiliation partners may change between seasons and/or years (Braun et al. 2012). This finding is corroborated by observations at other sites (see Glutz von Blotzheim 1993) and, from a functional point of view, supports the view of non-breeders testing potential long-term partners.
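The contrast between the large agonistic and small affiliative networks can be expressed as network density: the number of observed dyads divided by the number of possible dyads. A minimal sketch with invented toy data (not our field data):

```python
def network_density(edges, n_nodes):
    """Density of an undirected network: observed dyads / possible dyads."""
    dyads = {frozenset(e) for e in edges}  # ignore direction, drop duplicates
    return len(dyads) / (n_nodes * (n_nodes - 1) / 2)

birds = 10
agonistic = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
affiliative = [("A", "B")]
print(network_density(agonistic, birds))    # 5/45
print(network_density(affiliative, birds))  # 1/45
```

With real observation data, comparing these two densities per group and year would reproduce the agonistic-versus-affiliative pattern described above.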
In our captive groups, ravens consistently form a dominance hierarchy (Boucherie et al. 2022), which has also been reported from studies in which ravens were temporarily kept in free flight (Gwinner 1964) or wild ravens were temporarily restrained in an aviary (Marzluff and Heinrich 1991). Furthermore, there have been speculations that wild ravens may form dominance rank hierarchies at commonly used anthropogenic foraging sites (Huber 1991). Indeed, our recent analysis of raven foraging groups at our study site reveals a clear, steep dominance rank hierarchy (Boucherie et al. 2022). Note that about 100 marked ravens were involved in each of the two data sets of this field study (2008–10; 2017–19), representing about half of the population using the park at those times. Assuming that interactions with unmarked birds follow the same pattern as with marked birds, this suggests that our ravens can deal with the dominance status of at least 200 individuals. Recall that not all of the ravens are present in the foraging groups at our study site all the time; in fact, two-thirds of them show medium to high degrees of fission–fusion dynamics and pass by only occasionally. As a whole, the evidence suggests that ravens have a good memory of individuals and their rank, which is reinforced during facultative encounters. Alternatively, they might use observable cues (like body size) or behavioral expressions (like self-aggrandizing displays related to sex, age and/or bonding status; Lorenz 1940; Gwinner 1964) to judge the dominance status of others (Heinrich 1989).
Under captive and field conditions, males tend to outrank females, older birds tend to outrank younger ones and bonded birds tend to outrank non-bonded birds (Braun et al. 2012; Boucherie et al. 2022). These effects indicate that ravens achieve their rank in the dominance hierarchy primarily through their competitive abilities, such as physical strength (males are larger) and fighting experience (older birds have an advantage), and—to some extent—also through (repeated) social support by other ravens. Social support refers to a bystander intervening in a conflict, either by actively helping (attacking one of the combatants) or passively by its mere presence in a conflict situation (e.g., one of the combatants backs off when the supporter approaches; Fig. 2). Providing active support has been considered fundamental to social intelligence, especially the Machiavellian version (Whiten and Byrne 1988). In ravens, active social support occurs in about 5–10% of agonistic interactions and is used selectively, depending on the individual characteristics of the birds involved in the conflict and/or the relationship of the supporter to one of the combatants. For instance, active aggressor support is typical for younger males, who use conflicts of dominants to challenge individuals ranked higher than themselves. Victim support can result as a byproduct of dominants attacking young aggressors (likely as a response to them challenging the hierarchy) or directly from individuals that mutually support each other. Note that aggressor support is shown about twice as often as victim support, likely because the latter poses a higher risk of injury to the supporter, particularly when conflicts are severe (fights, chases). Yet, getting help as a victim likely changes the outcome of the conflict (Szipl et al. 2017) and, if provided repeatedly, may lead to a rank dependent on the supporter’s help.
Dependent ranks have already been described by Lorenz (1931, 1988) and appear to be common among pair partners in long-term monogamous species like many birds (e.g., Scheiber et al. 2005) and in kin-structured groups (e.g., matrilines in old-world primates, social carnivores, cetaceans; Cheney and Seyfarth 1990, 2007; Holekamp et al. 2007; Whitehead 2008). Long-term collaborations in conflicts are also referred to as alliances, which contrast with facultative short-term coalitions (de Waal and Harcourt 1992). Applying this terminology, we see both strategies in raven foraging groups: facultative coalitions and long-term alliances. Note that the latter occur not only between breeding pairs, but also between non-breeders with high-quality relationships, i.e., social bonds.
Following the concept developed in primatology (Hinde 1976), we refer to social bonds in ravens when individuals exchange affiliative behaviors reciprocally over time. Recall that such an exchange is not restricted to adults but can be seen in birds of all age-classes (Braun et al. 2012), whereby siblings tend to be preferred partners in young ravens (Kulahci et al. 2016). The amount and equity of providing affiliative services may differ substantially between bonding partners and likely reflects the value of the dyad’s relationship (Fraser and Bugnyar 2010a). In our wild population, for instance, females tend to provide substantially more allo-preening than their male counterparts during bond formation. At the same time, they start winning conflicts during foraging, suggesting that they immediately profit from the presence of a (potential) bonding partner. Males also profit from bonding by winning more conflicts, but not before they consistently reciprocate affiliative services to their partner (Braun et al. 2012). In captivity, bonded ravens tend to reconcile their conflicts (Fraser and Bugnyar 2011) and consistently provide bystander affiliation to each other post-conflict (Fraser and Bugnyar 2010b). These patterns fit well to the conflict management described in social mammals, particularly primates (Aureli and de Waal 2000), and indicate that ravens might be interested in repairing their relationships when damaged by conflicts, and in alleviating the stress of their bonding partners caused by conflicts with others. Post-conflict behaviors have also been described for other corvids in captivity (e.g., Seed et al. 2007; Sima et al. 2018; Logan et al. 2013), but we do not have experimental evidence for the use of those strategies under field conditions yet (Lee et al. 2019).
However, we know from primate studies that post-conflict bystander affiliation can serve different functions and may differ substantially between species and even within (captive) populations of the same species (Koski and Sterck 2007; Fraser et al. 2008). Still, the fact that wild and captive ravens support each other during and (possibly) after conflicts when bonded provides a strong case that forming and maintaining relationships are of immediate value to them in daily social life. In this respect, it is surprising that not all birds have bonding partners; in fact, about half of the ravens in our foraging groups have no partners at all over the course of a year and sometimes over years.
Social knowledge and cognition
To investigate whether ravens recognize and memorize individuals and their social relationships, we make use of their elaborate acoustical communication via playback experiments, inspired by the seminal work of Cheney and Seyfarth (1990, 2007) on primates. We focus on a subset of calls that can be linked to a specific environmental context (like the presence of food and predators) or social context (like self-advertisement or appeasement) and which typically contain information about the sender’s identity, sex and age class (Boeckle et al. 2012, 2018). Furthermore, we apply behavioral observations to examine tactics that imply the use of social knowledge.
If we first focus on captive ravens, where we have full control over their exposure to conspecifics (whom they have or have not met, and thus can or cannot know), we have shown that they remember former group members and their relationship valence over years (Boeckle and Bugnyar 2012). Specifically, adult pair-housed ravens respond more strongly to the playback of contact calls (‘rab’) of former group members (individuals they were kept with as non-breeders) than to those of unfamiliar birds matched for sex and age, and they modulate their call response to familiar birds depending on whether they were former ‘friends’ or ‘foes’ (birds they had a close affiliative relationship with or not). These results are in line with our hypothesis that ravens build up social knowledge about group members in the non-breeder state. That they can retain this information for years fits to the moderate to high fission–fusion dynamics ravens experience under field conditions.
Our field studies corroborate that ravens discriminate between social categories, and possibly individuals, in daily life situations. For instance, foraging ravens vary strongly in calling at food (‘haa’), which can be explained by social factors like the presence/absence of territory holders (Marzluff and Heinrich 1991 ) but also by individual characteristics like the birds’ age, sex and vagrancy status (Boeckle et al. 2018 ; Szipl et al. 2015 ). Using a paired playback design, we have shown that ravens foraging in the Cumberland Wildpark preferably approach loudspeakers broadcasting female callers, but only if those are local birds, i.e., familiar to them (Szipl et al. 2015 ). On the production side, we note that adult females call at food when they are all by themselves at the foraging site (Sierro et al. 2020 ), suggesting that they address their bonding partners (i.e., want them to come).
A key question with respect to social knowledge is whether individuals represent not only their own relationships but also the relationships between other group members (third parties; Cheney and Seyfarth 1986; Tomasello and Call 1997). Indeed, when we play back a simulated conflict between two familiar individuals to subadult ravens in captivity, they respond more strongly to playbacks in which the outcome reflects a violation of the hierarchy than to outcomes that are in line with the existing hierarchy (Massen et al. 2014a). Interestingly, ravens respond to such simulated rank reversals not only when those concern members of their own group but also members of the adjacently kept group. These findings clearly show that ravens can represent the rank relationship between other individuals, and they can possibly do so by mere observation.
In our playback experiment, we make use of the fact that ravens utter specific calls when they are challenged by a dominant (Gwinner 1964). These calls may primarily serve to appease the aggressor but also alert and/or attract nearby ravens (Heinrich et al. 1993). As with food calls, appeasement calls vary strongly between individuals and contexts. Given their functions, we can expect victims of aggression to call more when the conflicts are severe (to appease the aggressor) and/or when there are potential allies in the audience (to seek help). Indeed, we see that victims modulate their calling rate according to the audience composition: they increase calling when close kin are present but decrease calling when their aggressors’ bonding partners are present (Szipl et al. 2018). The former indicates that (young) ravens take into account their own relationships when calling for help; the latter indicates that they also take into account the relationships between others, as they seemingly try not to alert the aggressor’s partner. Hence, the context of social support seems promising to probe for third-party understanding in wild ravens. Note that ravens intervene not only in others’ conflicts but also in others’ affiliative interactions (Massen et al. 2014b), whereby they selectively target individuals that are about to form bonds (i.e., start reciprocating affiliative behaviors). Recall that bonded birds provide both active and passive support, leading to a higher probability of winning conflicts and eventually a rise in rank (Braun et al. 2012). We thus interpret the selective interventions in early stages of bonding as attempts to prevent those birds from becoming alliance partners and thus possible competitors. This would mean that ravens not only come to understand others’ relationships but also try to prevent some from forming. Such tactical moves were first reported for chimpanzees and referred to as ‘politics’ (de Waal 1982).
Communicated by F. Bairlein.
Ravens and other corvids are renowned for their ‘intelligence’. For long, this reputation was based primarily on anecdotes, but in the last decades experimental evidence for impressive cognitive skills has accumulated within and across species. While we begin to understand the building blocks of corvid cognition, the question remains why these birds have evolved such skills. Focusing on Northern Ravens Corvus corax, I here try to tackle this question by relating current hypotheses on brain evolution to recent empirical data on challenges faced in the birds’ daily life. Results show that foraging ravens meet several assumptions for applying social intelligence: (1) they meet repeatedly at foraging sites, albeit individuals have different site preferences and vary in grouping dynamics; (2) foraging groups are structured by dominance rank hierarchies and social bonds; (3) individual ravens memorize former group members and their relationship valence over years, deduce third-party relationships and use their social knowledge in daily life by supporting others in conflicts and intervening in others’ affiliations. Hence, ravens’ socio-cognitive skills may be strongly shaped by the ‘complex’ social environment experienced as non-breeders.
Ravens and their relatives are commonly regarded as ‘clever’ birds. For a long time, their reputation rested largely on anecdotes; only in recent decades have their impressive cognitive abilities also been demonstrated in behavioral experiments. While we increasingly understand the cognitive building blocks of ‘raven intelligence’, the question remains why these birds have evolved such powerful brains. In this article, I pursue this question in Common Ravens Corvus corax by relating current hypotheses on the evolution of intelligence to recent empirical data on the demands of ravens’ daily lives. The results show that groups of foraging ravens meet all assumptions of the social intelligence hypothesis: individuals meet each other repeatedly at foraging sites, showing different preferences for particular sites and thus varying in grouping dynamics; the groups are characterized by dominance structures and social relationships; individual ravens remember the valence of their relationships over years, recognize the relationships of others and use their social knowledge in daily life by intervening in others’ conflicts and interfering in others’ relationships. The socio-cognitive abilities of ravens may therefore be strongly shaped by the ‘complex’ social environment they experience as non-breeders.
Summary and outlook
Taken together, our studies reveal that (1) ravens meet repeatedly at foraging sites, either at the same location or at different locations; (2) foraging groups are composed of individuals with different site preferences and thus degrees of fission–fusion dynamics; nevertheless, the groups are structured by dominance rank hierarchies and social bonds; (3) ravens memorize former group members and their relationship valence over years, deduce third-party relationships and use their social knowledge in daily life by supporting others in conflicts and intervening in others’ affiliations. Hence, ravens meet our assumptions concerning social foraging and intelligence.
Before drawing conclusions, let me try to put each of the key results into context: given that our findings come from ravens that almost exclusively feed on food sources of anthropogenic origin, we need to be open to the possibility that the observed patterns could be a recent development with limited implications from an evolutionary perspective. Wild parks or skiing huts, for instance, have not been operating much longer than 50 years. Yet, ravens are scavengers and, as such, prone to utilize food made accessible by other species (Stahler et al. 2002; Vucetich et al. 2004), humans being no exception (Heinrich 1989). In fact, there is a long history of ravens exploiting resources provided by humans over hundreds and possibly thousands of years (Marzluff and Angel 2005; Baumann et al. 2023). Hence, ravens in Middle Europe might have simply adjusted to the type of resources offered in today’s landscape, but their regular meetings at foraging sites could possibly reflect a species-general feature typical of their scavenging lifestyle. In support of this idea, a recent project in Yellowstone National Park shows that also under ‘naturalistic’ conditions (with limited impact by humans), ravens rely to a great extent on human subsidies, forming groups at anthropogenic food sources especially during winter (Ho et al. 2023). We also see similar patterns of group formation and composition in carrion and hooded crows Corvus corone and C. cornix foraging in Zoo Vienna (Uhl et al. 2019). Hence, our findings may apply generally to corvids with a foraging ecology and social structure similar to that of common ravens.
Raven groups at anthropogenic food sources can be interpreted as aggregations, with birds ending up using the same food source independently of each other. However, if raven groups were only aggregations at foraging sites, we would not expect them to signal their motivation to feed via specific calls, nor would we expect them to display a dominance rank hierarchy in competition for food. Yet, our ravens do use ‘haa’ calls before they start foraging at the enclosures of zoo animals, which indicates that, as at carcasses, individuals actively coordinate their approach to food (Heinrich 1988). During feeding, ravens repeatedly get into conflicts with each other, whereby they show a clear dominance rank hierarchy despite regular changes in group composition. Forming and keeping track of dominance relationships thus works under conditions of moderate (to high) fission–fusion dynamics. From a cognitive point of view, this fits well to the fact that several corvid species tested on transitive inference tasks in the lab are capable of predicting rank relationships (Bond et al. 2003; Lazareva et al. 2004; Paz-y-Mino et al. 2004; Mikolasch et al. 2013). These results are in line with those from species as diverse as wasps Polistes sp. (Tibbetts et al. 2019), fish Astatotilapia burtoni (Grosenick et al. 2007), geese Anser anser (Weiß et al. 2010) and primates Macaca mulatta (Gazes et al. 2012); together they support the conclusion that transitive inference is one of the cognitive building blocks that emerge when animals live in social groups structured by dominance ranks (MacLean et al. 2008; Fernald 2014; Doi and Nakamura 2023).
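The logical core of a transitive inference task can be sketched as the transitive closure of a set of observed dominance relations: from A > B and B > C, an observer can derive A > C without ever seeing that dyad interact. The toy code below only illustrates this logic; it does not reproduce any of the cited lab protocols:

```python
def transitive_closure(known):
    """Expand a set of known dominance relations (a dominates b) by
    transitivity until no new pair can be inferred."""
    relations = set(known)
    changed = True
    while changed:
        changed = False
        for a, b in list(relations):
            for c, d in list(relations):
                if b == c and (a, d) not in relations:
                    relations.add((a, d))
                    changed = True
    return relations

observed = {("A", "B"), ("B", "C"), ("C", "D")}
inferred = transitive_closure(observed) - observed
print(sorted(inferred))  # [('A', 'C'), ('A', 'D'), ('B', 'D')]
```

The number of dyads an animal must track thus grows much more slowly than the number of dyads it can predict, which is one argument for why transitive inference pays off in rank-structured groups.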
At first glance, it may be of little surprise that raven foraging groups are also structured by social bonds, given that ravens form long-term monogamous breeding pairs (compare Emery et al. 2007). However, raven foraging groups consist to a large extent of immature birds (which by definition do not form breeding pairs) and adults without a territory (which do not have the opportunity to breed). Yet, they form social bonds that are hardly distinguishable from the pair bonds of territorial breeders, except that they appear to be less stable over time. The importance of ‘personal friendship’ was also noted by Lorenz (1931), as his tame ravens treated human interventions according to context. Under field conditions, the social bonds of (non-breeding) ravens seem to function as alliances in conflicts (compare de Waal and Harcourt 1992). Possibly, the social support comes as a by-effect of bonding, as has been described for other long-term monogamous species (e.g., Black and Owen 1986; Scheiber et al. 2005; Morales et al. 2022). However, ravens provide a decent amount of support also to non-bonded individuals, either by helping aggressors in beat-ups or by challenging aggressors before they can attack another bird. Such temporary coalitions speak for a tactical use of third-party interventions (Whiten and Byrne 1988), whereas the reciprocal support in alliances may be based primarily on emotions (Schino and Aureli 2021). As with most other species, the cognitive underpinnings of both types of social support remain speculative and would need to be investigated experimentally. The same is true for post-conflict management, which seemingly emerges with the importance of social bonds across a variety of species (Fraser et al. 2009) and may, but does not have to, be based on sophisticated cognitive mechanisms (Cordoni et al. 2023).
Given the composition and dynamics of foraging groups, we can argue that ravens do face a ‘complex’ social life. According to the social intelligence hypothesis, we may thus expect them to build up social knowledge about group members, which is in line with the results from our playback experiments under captive and field conditions. Memorizing individuals and their relationships over years fits well to what is known from other social animals (review on individual recognition: Yorzinski 2017; social memory: e.g., McComb et al. 2000; Bruck 2013). Possibly, long-term memory for group members is the rule rather than the exception in species living in structured social groups. This said, it is often unclear whether the animals’ memories are truly based on individual recognition or rather on refined class-level recognition (Tibbetts and Dale 2007). For instance, the ravens in our experiment might have remembered social categories (in-group vs. out-group members; affiliates vs. non-affiliates). However, we know from a study by Kondo et al. (2012) that large-billed crows Corvus macrorhynchos match the visual image and acoustical call of group members, but not of unfamiliar individuals, in a cross-modal design. Hence, we have experimental evidence for individual recognition based on mental representation in a closely related corvid species. Moreover, our simulated rank reversal experiment would not have worked had the ravens not been capable of recognizing specific individuals and their rank relationships. Such third-party understanding is considered an important building block for advanced social cognition (Tomasello and Call 1997), as it allows high flexibility in social maneuvers. However, third-party understanding also entails a high information load, and it is still debated how well it is expressed in different species (see Bergman 2010 within primates; Lee et al. 2019 within corvids).
The selectivity in requests for social support by victims of aggression suggests that wild ravens are capable of tracking the affiliation status of other ravens. Together with the results from playback experiments on simulated rank reversals, we may conclude that ravens can track others’ dominance and affiliation status. The selective interventions in affiliative interactions by bonded birds indicate that ravens may even go a step further and attempt to manipulate the formation of bonds in other ravens, which may be referred to as ‘politics’ (de Waal 1982). Aside from ravens and some primates (Mielke et al. 2017), interventions in affiliative behaviors have also been reported from domestic horses Equus caballus (Schneider and Krueger 2012), but the strategic character of those maneuvers is debated. Again, experiments would be needed to test the cognition underlying those tactics.
In conclusion, socially foraging ravens fulfill several criteria for applying social intelligence (sensu Whiten and Byrne 1988). They do show sophisticated behaviors and cognitive skills in the social domain that are comparable to those reported from other socially complex species, notably primates. Although our findings support the idea of convergent evolution of socio-cognitive traits in distantly related taxa (Emery et al. 2004), we still need to test for the cognitive mechanisms underlying (some of) these traits in either of the taxonomic groups. As a final point, I would like to highlight the enormous variation we see among individuals in how they cope with (the same) challenging situations in everyday life. Understanding the causes and consequences of this variation (e.g., nutritional/social/developmental stress: Nowicki et al. 2002; Sachser et al. 2011; Boogert et al. 2014; social competence: Taborsky and Oliveira 2012) would be an important next step towards an integrative view of raven social cognition, much in the sense of Tinbergen (1963).
Acknowledgements
I am grateful to all members of the Bugnyar lab in the last 20 years, the keepers and staff at the research stations, and my colleagues at the Department BeCogBio. I acknowledge financial support by the FWF (J2064, J2225, R31, I105, Y366, P29705, P3390, W1234, W1262), WWTF (CS11-008), the Faculty of Life Science, the Austrian Ministry of Science and Education and the Verein der Förderer KLF, and logistical support from the Herzog von Cumberland Stiftung, the Cumberland Wildpark Grünau, private raven owners and the Tiergarten Schönbrunn.
Funding
Open access funding provided by Austrian Science Fund (FWF). | CC BY | no | 2024-01-15 23:41:52 | J Ornithol. 2024 Oct 4; 165(1):15-26 | oa_package/d9/57/PMC10787684.tar.gz |
PMC10787685 | 38217719 | Introduction
Renal stones are among the most common urologic diseases globally, with lower pole calculi accounting for approximately 25% to 35% of cases [ 1 ]. The best choice for lower pole stones (LPS) remains controversial, with several scholars recommending active surveillance [ 2 , 3 ]. However, the Wisconsin Stone Quality of Life Questionnaire (WisQOL) score for most patients with LPS was poor [ 4 ], as patients worried about stone growth as well as the renal colic caused by possible stone migration. The European Association of Urology (EAU) guidelines recommend Extracorporeal Shock Wave Lithotripsy (ESWL) as the first-line treatment for LPS < 10 mm, while Percutaneous Nephrolithotripsy (PCNL) is preferred for LPS > 20 mm [ 5 ]. For LPS of 10–20 mm, ESWL is also recommended, but the reported stone-free rate (SFR) is poor [ 6 , 7 ]. PCNL can achieve an excellent SFR but is limited by a greater risk of hemorrhage.
With improved technology, flexible ureteroscopy (FURS) has become an attractive alternative for many urologists in the management of 10–20 mm LPS. Some researchers have recommended FURS for LPS due to its high success and low complication rates [ 1 , 8 – 10 ]. However, a sharp infundibular–pelvic angle (IPA) results in a decreased SFR of FURS for LPS compared with upper or middle calyx stones. Therefore, several surgeons recommend relocating the LPS to a favorable calyx before lithotripsy to improve the SFR [ 1 , 8 , 11 ]. However, Shrestha et al. reported that relocation of LPS followed by laser lithotripsy achieved a similar SFR to in situ lithotripsy [ 12 ].
The current study compared the efficacy and safety of relocation during FURS with in situ lithotripsy for the treatment of 10–20 mm LPS. It was hypothesized that relocating LPS during FURS could improve the SFR. | Methods
A prospective randomized trial of 90 patients who underwent FURS with holmium laser lithotripsy for 10–20 mm LPS in our center was conducted from November 2020 to November 2022. The study protocol was approved by the local Institutional Review Board (Approval Number: KY2020-068-01). All patients provided written informed consent before the operation. Necessary preoperative diagnostic procedures (medical history, serum creatinine and electrolytes, urine tests, sterile urine culture, 24-h urine electrolytes, and parathyroid hormone) were performed. Renal stones and kidney characteristics were assessed through low-dose abdominal non-contrast computed tomography (NCCT) and plain X-rays of the kidney–ureter–bladder (KUB). Patients with X-ray negative LPS, upper/middle pole stones, hyperparathyroidism, ureteral stricture, calyceal diverticular stones, medullary sponge kidney, or renal abnormalities (such as pelvic kidney or horseshoe kidney) were excluded. Data analysis included the patients’ demographics, stone characteristics, surgical details, perioperative outcomes, and SFR.
FURS technique
The whole FURS procedure was performed by Doctor Xu (Doctor of Medicine, associate chief physician of the Urology Department) using an 8.4-Fr flexible ureteroscope (Olympus, URF-V2, Japan). In our study, a double-J stent was routinely inserted into the target ureter under local anesthesia 2 weeks before FURS. Patients were placed in the dorsal lithotomy position under general anesthesia, and an intravenous antibiotic according to the sterile urine culture was administered 30 min preoperatively in all cases (if the urine culture was negative, empirical antibiotics were used). Ureteroscopy was routinely performed using a semi-rigid ureteroscope (8F–9.8F, Richard Wolf GmbH, Knittlingen, Germany) before FURS in all patients, so that the preoperative double-J stent could be removed and two guide wires [a safety guide wire (HiWire, Cook Medical) and an operating guide wire (Sensor, Boston Scientific)] could be placed into the target renal pelvis. Then, a hydrophilic-coated ureteral access sheath (Cook Medical, 12–14 Fr, Ireland) was inserted alongside the Sensor wire under direct vision. LPS were treated using a 200 μm holmium laser fiber with a pulse energy of 0.8–1.0 J and a pulse frequency of 10–30 Hz, depending on the volume and hardness of the stones. LPS were fragmented directly in the in situ group. If an LPS in the in situ group could not be managed with the laser fiber, this was defined as lithotripsy failure, and the patient was placed in the relocation group if the LPS could be displaced by a stone basket. In the relocation group, the LPS were moved to a favorable renal calyx (upper/middle poles) with a stone basket (TIPLESS Stone-extractor Nitinol, reusable handle) before lithotripsy. Large LPS that could not be engaged in the basket were first fragmented, and the fragments were then relocated to the desired calyx for further lithotripsy. Basket retrieval was used for stones larger than 2 mm, while fragments ≤ 2 mm were left for spontaneous passage.
A 4.7-Fr double-J stent was left in place in all cases. Stones were collected postoperatively to analyze the composition through Infrared Spectroscopy. Patients with LPS that could not be touched even when using the stone basket were excluded from this study and treated later with other appropriate treatments.
Follow-up and statistical analysis
All patients underwent KUB radiography 1 day postoperatively to evaluate the primary SFR. The double-J stent was removed 2 weeks later by cystoscopy in outpatients. All patients obtained a KUB as well as a B-ultrasound 3 months after the first treatment. A subsequent procedure or other treatment modalities were chosen to help extract stones when faced with poor SFR. Stone-free status was defined as the absence of any stones in the KUB radiography and stone fragments ≤ 2 mm in B-ultrasound. Complications were graded according to the Clavien–Dindo classification system [ 13 ].
Statistical analysis was performed through SPSS statistical software version 27.0, with the data expressed as mean ± standard deviation (SD). The data were analyzed using the Chi-square test for categorical variables and the independent-sample T test for continuous variables. A P value < 0.05 was considered statistically significant. | Results
A total of 90 patients with 10–20 mm LPS were enrolled in this study and randomized into two groups. The patients’ demographics and stone characteristics are outlined in Table 1 , showing no significant difference between the two groups in terms of age, BMI, gender, diabetes, hypertension, and urine culture. Additionally, stone characteristics were also similar in terms of stone number, laterality, cumulative burden, composition, and density. The average diameter of the cumulative stone burden in the relocation and in situ groups was 13.2 mm and 13.7 mm, respectively.
The perioperative parameters and outcomes of both groups are presented in Table 2 . The average American Society of Anesthesiologists classification score was similar, with no significant differences in perioperative parameters including the operation time, total energy consumption, postoperative duration, and overall cost. Of note, the operative time for 10–15 mm LPS in the relocation group was shorter than in the in situ group, whereas the opposite held for LPS of 15–20 mm, although neither difference was statistically significant (26.43 ± 11.13 vs 34.0 ± 16.34, p = 0.061; 59.41 ± 23.84 vs 47.63 ± 15.13, p = 0.065, respectively). The primary SFR in the relocation group and in situ group was 82.2% and 66.7%, respectively ( p = 0.091). However, there was a significant difference three months later, as the SFR in the relocation group had improved greatly compared to the in situ group (97.8% vs 84.4%, p = 0.026). Meanwhile, the WisQOL score in the relocation group was higher than that of the in situ group, consistent with its excellent SFR (126.98 ± 10.13 vs 110.18 ± 23.95, p < 0.001).
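As a plausibility check, the reported SFR p-values can be reproduced from the percentages above. The stone-free counts below (37/45 vs 30/45 at 1 day; 44/45 vs 38/45 at 3 months) are back-calculated from the rounded percentages with 45 patients per group, and a Pearson chi-square without continuity correction is assumed — neither the exact counts nor the correction setting is stated in the paper:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for a 2x2 table.
    The df = 1 p-value uses the identity P(chi2_1 > x) = erfc(sqrt(x/2))."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return chi2, math.erfc(math.sqrt(chi2 / 2))

# Stone-free vs residual counts inferred from the reported percentages.
chi2_1d, p_1d = chi2_2x2(37, 8, 30, 15)   # 82.2% vs 66.7% at 1 day
chi2_3m, p_3m = chi2_2x2(44, 1, 38, 7)    # 97.8% vs 84.4% at 3 months
print(round(p_1d, 3), round(p_3m, 3))     # 0.091 0.026
```

Both values match the reported p = 0.091 and p = 0.026, which suggests the comparison was indeed an uncorrected chi-square on the group counts.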
Three patients in the in situ group were subsequently placed in the relocation group because the LPS could not be reached by the laser fiber, necessitating the use of the stone basket to relocate the LPS. The procedure was successful, and we believe this measure effectively spared these patients a second-stage procedure. The SFR of these three patients within the in situ group was defined as unclear.
According to the Clavien classification system, no serious intraoperative complications occurred, not even minimal ureteral perforation. Postoperative fever was the most common complication in both groups: four patients were administered indomethacin and recovered (Clavien grade I), while two patients (one in each group) required antibiotics according to the urine or blood cultures (Clavien grade II). Two patients in the relocation group suffered a double-J-related fever, and the double-J stent was immediately removed by cystoscopy under local anesthesia (Clavien grade IIIa). A new double-J stent was inserted under general anesthesia in one patient in the relocation group because of migration of the double-J stent (Clavien grade IIIa). | Discussion
The best treatment for 10–20 mm LPS remains controversial. The EAU guideline on urolithiasis recommends ESWL or endoscopy as the first-line treatment [ 5 ]. However, the SFR of ESWL is affected by many factors, such as the patient’s BMI and stone density. The reported success rate of ESWL for 10–20 mm LPS is 23% to 85% [ 6 , 11 , 14 ]. Recent data have shown that FURS could be an optimal approach for LPS [ 1 , 8 – 11 ]. It is performed through the natural orifice, making it less invasive and safer than PCNL. The difficulty during FURS is mainly due to the narrow IPA and the long lower pole infundibula [ 15 ]. The narrow IPA lowers the final SFR for LPS and also shortens the service life of the flexible ureteroscope because of intraoperative overbending. Several innovations have been applied to improve the SFR, including better ergonomics of the flexible ureteroscope, the use of an access sheath with vacuum aspiration, changes in operative position, thinner laser fibers, and the stone basket [ 10 , 16 , 17 ]. In 2000, Kourambas et al. published their experience that relocating the LPS to a favorable calyx during FURS could improve the SFR, although not significantly [ 18 ]. Subsequently, Schuster et al. reported that relocating the LPS could significantly improve the SFR [ 11 ]. Furthermore, Yaghoubian et al. shared a similar experience [ 8 ]. However, another study showed that relocating LPS during RIRS did not significantly improve the SFR compared to in situ lithotripsy [ 12 ]. Therefore, we performed this prospective study to assess whether relocating LPS during FURS could improve the SFR.
In the present study, there was a significant difference in SFR between the relocation and in situ groups 3 months postoperatively. Three patients in the in situ group were reallocated to the relocation group because the stones could not be reached by the holmium laser. The stones were then relocated to the upper calyx with a stone basket, followed by straightforward lithotripsy. Fragments in the upper or middle calyx may be easier to excrete, whereas fragments may move to other invisible calyces during in situ lithotripsy. Additionally, certain activities that can help fragments pass, such as jumping and handstands, are quite difficult for elderly or obese patients, thereby increasing the risk of residual stone fragments.
Several surgical positions have been applied to improve the SFR for LPS. Zhong et al. recommended FURS in the lateral position for LPS, with a satisfactory SFR at 1-month postoperative follow-up [ 10 ]. Liaw et al. reported that the T-tilt patient position was associated with a higher SFR [ 19 ]. In the present study, a modified dorsal lithotomy position (30° Trendelenburg) was used so that LPS fragments in the upper or middle pole could be accessed more easily during surgery. Meanwhile, a 4.7-Fr double-J stent was inserted into the target renal pelvis preoperatively, as dilation of the ureter may facilitate stone fragment extraction and decrease the risk of ureteral injury caused by the access sheath.
In the current series, X-ray negative stones were excluded, and KUB was used to evaluate the SFR 1 day postoperatively, with B-ultrasound used to assess the SFR 3 months later. The overall SFR was 91.1%, with the SFR in the relocation group significantly improved compared to the in situ group. Golomb et al. reported an excellent SFR with displacement of LPS during RIRS, similar to the present study [ 1 ]. Another study reported an SFR for LPS via RIRS of 85.7% [ 10 ], which might be related to the larger stone size and the short 4-week follow-up. The lithotripsy technique depended on the stone’s characteristics: typically, we chose “dusting” first and fragmentation later when the stone core was hard and difficult to dust. Basket retrieval was routinely performed to help relocate the LPS and improve the immediate SFR in both groups. In our series, most LPS were smaller than 15 mm, and dilation via the double-J stent before FURS may also have contributed to the satisfactory final SFR. All cases with residual fragments in our study involved stone burdens larger than 15 mm. For such cases, a curved-end suctioning access sheath might be an optimal choice, and a PCNL technique, such as super-mini percutaneous nephrolithotripsy (SMP) or ultra-mini PCNL [ 20 , 21 ], is also a good option for LPS with hydrocalycosis.
Regarding operative time and total laser energy consumption, Schuster et al. reported a significantly longer operative time in the relocation group [ 11 ], whereas Yaghoubian et al. reported that the operative time was slightly increased in the relocation group, as well as the laser energy consumption, although with no statistical significance [ 8 ]. A similar result was achieved by Shrestha et al. [ 12 ]. There was also no significant difference between the two groups in our study, due to the improved ergonomics and the popularization of flexible ureteroscopy in the last 2 decades.
The main purpose of stone treatment modalities is to achieve maximum SFR with minimum complications. Our overall complication rate was 10%, which is acceptable and comparable to published data [ 8 – 12 ]. Postoperative fever was the most common complication in our study but no urine-related sepsis or ureteral injury was observed. Additionally, no second-stage procedure was required in either group.
WisQOL is a reliable tool to evaluate the quality of life of patients with renal stones [ 4 , 22 – 25 ]. In our study, patients in the relocation group had a significantly higher postoperative WisQOL score than those in the in situ group, probably due to the higher SFR in the relocation group. The overall cost of the procedure was not increased by the use of a stone basket and double-J stent placement before FURS. We used a reusable flexible ureteroscope instead of a single-use scope. Furthermore, the use of the basket and the ureteral dilation procedure may help increase postoperative fragment extraction and the success rate of the one-stage FURS procedure.
This study has several limitations. First, the study sample was small, and a further prospective randomized-controlled trial with more cases is required to confirm our findings. Second, KUB and B-ultrasound were used to assess the SFR of LPS instead of NCCT postoperatively; although we excluded X-ray negative LPS, NCCT remains the most sensitive imaging modality for assessing the SFR. Finally, we did not measure the infundibular characteristics in all patients, and a longer-term follow-up may be needed to calculate the SFR in both groups. | Conclusion
The final SFR was significantly higher in the relocation group with an acceptable low complication rate compared to the in situ group, indicating that relocating the LPS to a favorable calyx during FURS is effective and safe for the treatment of 10–20 mm LPS. | Objective
To compare the efficacy and safety of relocating the lower pole stones to a favorable pole during flexible ureteroscopy with in situ lithotripsy for the treatment of 10–20 mm lower pole stone (LPS).
Methods
This study was a prospective analysis of outcomes in patients who underwent an FURS procedure for the treatment of 10–20 mm lower pole renal stones from January 2020 to November 2022. The patients were randomized into a relocation group or an in situ group. In the relocation group, the LPS were relocated into a favorable calyx before lithotripsy was performed, whereas the in situ group underwent FURS without relocation. All procedures were performed by the same surgeon. The patients’ demographic data, stone characteristics, perioperative parameters and outcomes, stone-free rate (SFR), complications, and overall costs were assessed.
Results
A total of 90 patients were enrolled and analyzed in this study (45 per group), with no significant differences between the two groups in terms of age, gender, BMI, diabetes, hypertension, stone size, number, laterality, composition, and density. The mean operation time, total energy consumption, postoperative stay, and complications were similar between the groups. Both groups had a similar SFR at the 1-day postoperative follow-up ( p = 0.091), while the relocation group achieved a significantly higher SFR 3 months later (97.8% vs 84.4%, p = 0.026). The relocation group also had a significantly higher WisQOL score than the in situ group (126.98 vs 110.18, p < 0.001).
Conclusion
A satisfactory SFR with a relatively low complication rate was achieved by the relocation technique during the FURS procedure.
Supplementary Information
The online version contains supplementary material available at 10.1007/s00345-023-04703-6.
Keywords | Supplementary Information
Below is the link to the electronic supplementary material. | Acknowledgements
The English language of the present study was revised by Dr. Powell, and the authors thank him for his help and support.
Author contributions
Ru Huang: project development, data collection, and manuscript editing. Jian-Chen Chen: data collection and project development. Yong-Qiang Zhou: project development and manuscript editing. Jin-Jin Wang: data collection. Chu-Chu Hui: data collection. Min-Jun Jiang: project development, data collection, management, and manuscript editing. Chen Xu: project development, data collection, data analysis, and manuscript writing/editing.
Funding
This study was financially supported by the Priority Disease of Suzhou (LCZX202231) to Min-Jun Jiang and the Youth Project of “Gusu healthy” (GSWS2022110) to Chen Xu.
Data availability
The authors confirm that the data supporting the findings of this study are available within the supplementary files of the article.
Declarations
Conflict of interest
The authors declare that they have no competing interests.
Informed consent
Informed consent was obtained preoperatively from all patients included in our study. | CC BY | no | 2024-01-15 23:41:53 | World J Urol. 2024 Jan 13; 42(1):30 | oa_package/d9/c3/PMC10787685.tar.gz |
PMC10787686 | 37723300 | Introduction
Currently, healthcare systems are facing multiple burdens related to demographic factors, social structure, and a lack of physicians [ 1 , 2 ]. These problems have had considerable impacts on healthcare globally, resulting in increased waiting times, compromised quality of care, and higher costs for patients [ 3 ]. Telemedicine services can narrow this gap by reducing medical visits, saving both the patient’s and the healthcare provider’s time, and lowering the cost of treatment. Furthermore, owing to its speed and convenience, it can streamline the workflow of hospitals and clinics [ 4 ]. This disruptive technology can also make it easier to monitor discharged patients and manage their recovery [ 5 ]. Chatbots are one way of realizing such advantages and may play a beneficial role in health care to support, motivate, and coach patients, as well as to streamline organizational tasks [ 6 ]. Among them, chatbots based on artificial intelligence (AI) have shown the greatest potential for fulfilling a broad range of goals [ 7 , 8 ].
Urology represents a rapidly growing field with the introduction of modern treatments and technologies [ 9 ]. By harnessing the potential of AI-enabled chatbots, urology can embrace digital transformation, optimize resource utilization, deliver patient-centric care, and overcome the challenges posed by different constraints [ 10 ]. Despite the apparent benefits, the use of such chatbots in urology has not yet been explored [ 11 ]. From another perspective, it is important to acknowledge that chatbots come with their own set of concerns and limitations, posing challenges for their safe and effective use in healthcare settings [ 12 ]. This review aims to combine current evidence on the use of AI-based chatbots in urology and to provide valuable insights for future research and implementation in this field. | Material and Methods
Search Strategy
In April 2023, the publication search was conducted in several databases, including the ACM Digital Library, CINAHL, IEEE Xplore, PubMed, Google Scholar, and https://clinicaltrials.gov/ , using the following terms: “AI,” “Artificial intelligence,” “Chatbot,” “GPT,” “Decision aid,” “Support,” “Urology,” “Prostate,” “Bladder,” “Kidney,” “Ureter,” “Symptoms,” “Screening,” “Follow-up,” and “Treatment.” Based on the findings, we structured scenarios in which chatbots were used in urology or were discussed as potentially useful. Moreover, Google Search ( https://www.google.com/ ) was used to identify dedicated sources on urological chatbots that have not been published.
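For illustration, the Boolean combination implied by such a term list can be sketched as a query string. The grouping into "technology" and "domain" terms and the PubMed-style syntax below are my own assumptions; the review does not report the exact per-database queries used:

```python
# Assemble a PubMed-style boolean query from the review's term list.
# The split into technology vs domain groups is an illustrative assumption.
tech_terms = ["AI", "Artificial intelligence", "Chatbot", "GPT",
              "Decision aid", "Support"]
domain_terms = ["Urology", "Prostate", "Bladder", "Kidney", "Ureter",
                "Symptoms", "Screening", "Follow-up", "Treatment"]

def or_group(terms):
    """Quote each term and join the group with OR, in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = f"{or_group(tech_terms)} AND {or_group(domain_terms)}"
print(query)
```

Each database (ACM, CINAHL, IEEE Xplore) uses slightly different field syntax, so in practice such a string would be adapted per source.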
Inclusion Criteria
Due to the scarcity of data and comprehensive knowledge, we included original and development studies, ongoing trials, reviews, editorial comments, conference abstracts, and book chapters in the English language, without publication date restriction, to better highlight all potential uses of AI chatbots in urology.
Exclusion Criteria
Papers not in the English language and descriptions of chatbots without an AI structure were excluded.
Studies Process
Two reviewers (A.T. and N.N.) identified all sources. All studies that appeared to fit the inclusion criteria were included for full review. Each reviewer independently selected and structured studies. If there was disagreement or discrepancy, the senior author (B.K.S.) made the final decision.
Data Extraction and Analysis
We reviewed the studies and extracted information related to the scenarios in which chatbots were implemented in urology. We also looked for any arguments or evidence supporting the use of chatbots in urological patients. We did not conduct a quantitative analysis of the papers but rather used them to inform our understanding of the different scenarios in which chatbots were used and their potential utility in the urological field. | Results
Among the 567 papers investigated (Fig. 1 ), 15 discussed different scenarios for using AI-based chatbots in urology. These included symptom checkers and health screening, patient education and counseling, lifestyle change and conservative management, and clinical decision support and post-treatment follow-up care (Table 1 ).
Symptom Checkers and Health Screening
Chatbots can be a useful tool in assessing symptoms associated with urological diseases. In the presence of mild or non-urgent symptoms, indications for referral to a specialist may be better defined. Conversely, complicated urological diseases can be life-threatening if missed, highlighting the importance of timely and affordable screening.
Kobori et al. [ 13 ] described an AI chatbot for sexually transmitted infection (STI) screening, including gonorrheal and chlamydial urethritis, syphilis, and condyloma acuminatum. The accuracy rates for gonorrheal urethritis, chlamydial urethritis, syphilis, and condyloma acuminatum were 65%, 70%, 85%, and 95%, respectively, and 97.7% of patients considered visiting the clinic earlier after using the chatbot. Allen et al. [ 14 ] proposed a chatbot on decision-making for prostate cancer screening among African-American men, called Prostate Cancer Screening Preparation (PCSPrep). It significantly increased participants’ prostate cancer knowledge and reduced decisional conflict. Owens et al. [ 15 ] performed a paired study and found that the iDecide decision-aid chatbot improved participants’ prostate cancer knowledge and their self-efficacy in decision-making and in the use of technology.
Patient Education and Counseling
Chatbots can be used to educate and inform patients with urological diseases, helping to streamline the routine practice of specialists [ 16 ]. They can provide patients with information on urological conditions, treatment options, and preventive care measures, thus improving patient knowledge [ 17 ]. Wang et al. [ 18 ] developed the SnehAI chatbot to address private topics such as safe sex and family planning. SnehAI provides a private and nonjudgmental space for users, offering reliable and relatable information and resources. The study utilizes the Gibson theory of affordances to examine SnehAI’s functionalities, and it demonstrates strong evidence across fifteen functional affordances, including accessibility, multimodality, compellability, interactivity, and inclusivity. PROSCA, developed by Görtz et al. [ 19 ], is a user-friendly medical chatbot specifically focused on prostate cancer (PC) communication. The study aimed to evaluate PROSCA’s effectiveness in providing patient information about early detection of PC. The chatbot proved straightforward to use, with a majority of users (78%) not requiring any assistance, and 89% of chatbot users experienced a clear to moderate increase in their knowledge about PC. The participants expressed their willingness to reuse a medical chatbot in the future, highlighting support for chatbot integration in clinical routines. PROSCA demonstrated its potential for raising awareness, patient education, and support for early PC detection. Khawam et al. [ 20 ] are currently conducting a randomized controlled trial to investigate how useful a conversational chatbot will be in providing education and helping patients with urinary incontinence.
Lifestyle Change and Conservative Management
AI-based chatbots hold significant promise for enhancing patient outcomes in urology, particularly for patients who must adhere to treatment regimens and lifestyle modifications (Fig. 2 ). This is particularly relevant for a range of urological conditions such as urinary incontinence, urolithiasis, and erectile dysfunction, where changes to diet and exercise and adherence to medications can significantly improve symptoms and quality of life. Ray et al. [ 21 ] introduced “MenGO,” an integrated cloud-based system for personalized andrological health management. MenGO addresses multiple systemic issues in the field of andrology and men’s sexual wellness by utilizing statistical modeling and a natural language processing (NLP) chatbot. This one-stop solution caters to men suffering from chronic ailments such as erectile dysfunction, infertility, ejaculation problems, and prostate gland issues. The system provides access to affordable physiological and psychological treatments through a smart and interactive telehealth platform powered by cloud and big data analytics. The NLP chatbot integrated into MenGO enhances the overall user experience by facilitating communication and reducing barriers to seeking appropriate healthcare. Kim et al. [ 22 ] developed a patient-centered text message-based platform to promote self-management of symptoms associated with interstitial cystitis/bladder pain syndrome (IC/BPS). The platform consisted of four treatment module categories, namely, patient education and behavioral modification, cognitive-behavioral therapy, pelvic floor physical therapy, and guided mindfulness practices. Supportive messages were delivered through an automated algorithm, enhancing the concept of provider support through shared decision-making and reducing the sense of isolation experienced by patients.
This intervention empowered patients to manage their symptoms better, improve self-efficacy, and gain insight into their motivations and behaviors. Chen et al. [ 23 ] are currently conducting a randomized controlled trial to investigate how an AI chatbot intervention can impact the self-management and decision-making confidence of men with lower urinary tract symptoms (LUTS) caused by an enlarged prostate, with or without erectile dysfunction (ED), in the post-COVID-19 era. Patients have the opportunity to access the chatbot for free by scanning a QR code. The chatbot offers self-management guidance on topics including prostate enlargement, urinary symptoms, and erectile dysfunction. Moreover, it provides patient-centered decision-making tools aimed at supporting and empowering patients, particularly in relation to improving urination and erectile function.
Clinical Decision Support
AI-based chatbots in urology can improve decision-making by analyzing patient data and medical records. By leveraging machine learning algorithms, these chatbots can provide evidence-based recommendations for diagnosis and treatment plans (Fig. 3 ). This assists healthcare professionals in interpreting complex urological data, leading to more accurate and efficient clinical decisions [ 10 ]. These AI-based chatbots can consider a wide range of patient variables and provide personalized recommendations based on the specific case at hand. As stated by Gabrielson et al. [ 11 ], they may serve as an essential tool in the urologist’s armamentarium, allowing the physician to step away from the computer and turn the chair back toward the patient. Kim et al. [ 24 ] proposed a medical specialty prediction model for patient-side medical question text based on pre-trained bidirectional encoder representations from transformers (BERT). The dataset comprised pairs of medical question texts and labeled specialties scraped from a website for a medical question-and-answer service. The model was fine-tuned to predict the required medical specialty among 27 labels, including urology, from medical question texts. By analyzing the patient’s symptoms and complaints, chatbots can recommend which specialists in the field of urology or related areas should be consulted for a comprehensive examination and further diagnosis. This helps streamline the referral process and ensures that patients receive the most appropriate care based on their specific concerns.
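The specialty-routing idea behind such a classifier can be illustrated with a dependency-free stand-in: score the free-text question against per-specialty keyword sets and return the best match. This toy sketch is emphatically not the fine-tuned BERT model from the cited study, and the keyword lists are invented for the example:

```python
# Toy stand-in for specialty prediction: keyword-overlap scoring instead of
# a trained BERT classifier. Keyword lists are invented for illustration.
SPECIALTY_KEYWORDS = {
    "urology": {"urine", "urinary", "kidney", "bladder", "prostate", "stone"},
    "cardiology": {"chest", "heart", "palpitations", "pressure"},
    "dermatology": {"skin", "rash", "itch", "mole"},
}

def predict_specialty(question: str) -> str:
    """Return the specialty whose keyword set overlaps the question most."""
    words = set(question.lower().split())
    scores = {s: len(words & kws) for s, kws in SPECIALTY_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(predict_specialty("I have burning pain when passing urine"))  # urology
```

A real deployment would replace the keyword scoring with the fine-tuned transformer described in the study, but the routing interface (question in, specialty label out) is the same.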
Post-treatment Follow-up Care
Chatbots can be highly useful in post-treatment follow-up in urology by providing continuous monitoring and support for patients. They can collect and analyze patient-reported outcomes, such as urinary symptoms or quality of life indicators, and provide personalized recommendations for self-care or additional interventions. As noted by Goldenthal et al. [ 25 ], chatbots could be programmed to follow up with patients after their urology appointments, checking in on their symptoms and providing guidance on self-care. This could be especially beneficial for patients who live far from their urology clinic or who have mobility issues that make in-person follow-up visits difficult. | Discussion
Chatbots have been used in healthcare for decades; the earliest known healthcare chatbot, ELIZA, was created in 1966 [ 26 ]. Since then, chatbots have been utilized in various healthcare settings, from helping patients manage chronic conditions to providing mental health support. There are three main types of healthcare chatbots based on the input-processing and response-generation method: the rule-based model, the retrieval-based model, and the generative model [ 8 ]. Rule-based chatbots use pre-programmed responses to provide information to patients, while retrieval-based bots offer more flexibility, as they query and analyze available resources. Generative chatbots are based on machine or deep learning, improve their responses over time, and are the most promising type for several reasons: they can understand natural language and learn from user interactions to provide more personalized responses. Additionally, AI-based chatbots can be used to analyze large amounts of patient data to identify patterns and trends that can be used to improve healthcare outcomes [ 27 ]. GPT (generative pre-trained transformer) is a well-known representative of such AI chatbots and has been the subject of many discussions [ 28 ].
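To make the first of these architectures concrete, a rule-based responder can be sketched in a few lines: pre-programmed replies keyed on patterns in the user's message. The triage rules below are invented purely for illustration and are not medical advice:

```python
# Minimal rule-based chatbot: the first rule whose pattern appears in the
# (lower-cased) message determines the canned response. Rules are invented.
RULES = [
    ("blood in urine", "Visible blood in the urine warrants prompt specialist review."),
    ("burning", "Burning on urination can indicate infection; a urine test is advisable."),
    ("stone", "For suspected stones, imaging is usually the next step."),
]

def reply(message: str) -> str:
    text = message.lower()
    for pattern, response in RULES:
        if pattern in text:
            return response
    return "I'm not sure -- please describe your symptoms in more detail."

print(reply("I noticed blood in urine this morning"))
```

The fixed pattern list is exactly what limits rule-based bots: anything outside the pre-programmed rules falls through to the fallback, which is why retrieval-based and generative designs are considered more flexible.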
There are ample studies in the literature describing the potential prospects and applications of chatbots in urology. According to our findings, they are already being applied to urological symptom checking, health screening, patient education, counseling, lifestyle change, conservative management, clinical decision support, and post-treatment follow-up care. Similar results were shown by Calvo et al. [ 29 ], who conducted a study on the feasibility and usability of a text-based conversational agent that processes a patient's text responses and short voice recordings to estimate their risk of an asthma exacerbation. The chatbot offers follow-up information for lowering risk and improving asthma control, to improve understanding and self-management of the condition. Ferré et al. [ 30 ] developed a chatbot-based tool, called the MyRISK score, which collects self-reported patient data before the pre-anesthetic consultation to stratify patients according to their risk of postoperative complications. The tool was developed using the Delphi method and logistic regression analysis, with a machine learning model trained to predict the MyRISK score. The tool was found to be effective in predicting postoperative complications, with high sensitivity (94%) but low specificity (49%).
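Sensitivity and specificity figures such as those reported for MyRISK come from a standard confusion-matrix calculation. The sketch below is generic; the counts are hypothetical, chosen only to reproduce the reported 94%/49%, and are not the study's data.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Return (sensitivity, specificity) from confusion-matrix counts.

    tp/fn: complication cases correctly flagged / missed.
    tn/fp: complication-free cases correctly cleared / wrongly flagged.
    """
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity

# Illustrative counts only: 94 of 100 complication cases flagged,
# 49 of 100 complication-free cases correctly cleared.
sens, spec = sensitivity_specificity(tp=94, fn=6, tn=49, fp=51)
```

A high-sensitivity, low-specificity profile like this means the tool misses few at-risk patients but flags many who would not have had a complication.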
However, this is not an exhaustive list of their potential, as is evident from papers investigating AI chatbots in other medical fields. They can also be valuable for medical education, pre-operative preparation, and academic writing. For example, Han et al. [ 31 ] developed an AI chatbot educational program to promote nursing skills and found that the experimental group showed significantly higher interest in education and self-directed learning compared to the control group. These studies collectively suggest that chatbots hold promise as a valuable tool for medical education. Chetlen et al. [ 32 ] described the deployment of a chatbot to provide evidence-based answers to frequently asked questions for patients scheduled to undergo a breast biopsy procedure.
By streamlining processes and reducing wait times, chatbots can increase the overall efficiency of the healthcare system, leading to cost savings for healthcare organizations [ 33 ]. They can also prevent unnecessary office visits and hospitalizations by providing patients with timely and accurate information and support. Chatbots can increase patient engagement and satisfaction by offering personalized advice, information about their condition, and treatment options [ 34 ]. Studies have shown that chatbots are more engaging and interactive than traditional online forms, despite taking longer to complete [ 35 •]. According to a systematic review by Geoghegan et al. [ 36 ], the engagement rate for chatbots in the follow-up of patients who have undergone a physical healthcare intervention was up to 97%. In summary, chatbots have the potential to revolutionize the field of urology by improving patient care, optimizing workflow, and increasing the efficiency of the healthcare system.
Technical Limitations
One of the major challenges facing chatbots in urology is their technical limitations. While chatbots have the potential to improve patient care and physician efficiency, they may not always function as intended due to technical issues. For example, chatbots may experience system failures, errors, or glitches that can affect their performance and accuracy [ 37 ]. This can be particularly concerning when it comes to providing medical advice or making diagnoses, as any errors or inaccuracies could lead to serious harm to patients. Therefore, it is essential to thoroughly test chatbots and ensure that they are operating correctly and providing accurate information.
Privacy and Security Concerns
Another challenge that needs to be addressed when using chatbots in urology is privacy and security [ 38 ]. Chatbots may store personal health information, which raises concerns about data privacy and security. Patients need to be assured that their personal information is secure and protected from unauthorized access. Furthermore, any data breaches or security incidents could have significant consequences, including loss of patient trust and legal repercussions. Data privacy and security in health chatbots are still under-researched, and related information is underrepresented in the scientific literature [ 39 ]. Therefore, chatbots must comply with relevant privacy and security regulations to safeguard patient data.
Reliability and Accuracy
Ensuring reliability and accuracy is one of the most crucial factors for the success of chatbots in urology. To provide trustworthy information to patients and healthcare providers, they must be developed with reliable data sources and algorithms [ 40 ••]. They should also undergo continuous testing and updates to maintain their reliability and accuracy over time. However, if chatbots are not reliable or accurate, they could misinform patients, leading to incorrect diagnoses or treatments. For instance, Ben-Shabat et al. [ 41 ] evaluated the data-gathering function of eight chatbot symptom-checkers and found that the overall recall rate for all symptom-checkers was 0.32 (2280/7112; 95% CI 0.31–0.33) for all pertinent findings. These results suggest that the data-gathering performance of currently available symptom checkers is questionable. As new chatbots become available, hypotheses about their future utility in medicine are limited only by researchers' imagination. However, their current use should be limited to low-risk tasks with continued human oversight [ 11 ]. Regarding scientific writing, several ethical issues arise from the use of these tools, such as the risk of plagiarism and inaccuracies, as well as a potential imbalance in accessibility between high- and low-income countries if the software becomes paid. For this reason, a consensus on how to regulate the use of chatbots in scientific writing will soon be required [ 42 •].
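The recall estimate quoted from Ben-Shabat et al. is a simple binomial proportion. A normal-approximation interval, one common choice for such data (the authors' exact method is not stated here), reproduces the reported 95% CI from the raw counts:

```python
import math

def proportion_ci(successes: int, total: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = successes / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, p - half_width, p + half_width

# 2280 pertinent findings retrieved out of 7112 possible:
# recall ~0.32, 95% CI ~0.31-0.33, matching the figures cited above.
recall, ci_lo, ci_hi = proportion_ci(2280, 7112)
```

With n = 7112 the normal approximation is very accurate, which is why the interval is so narrow around the point estimate.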
Resistance to New Technology
Chatbots in healthcare face the challenge of resistance from patients and healthcare providers who are unfamiliar with new technology or prefer face-to-face interactions. To overcome this challenge, chatbots need to be designed to be user-friendly and easily integrated into existing workflows. Healthcare providers should also receive training on how to effectively use and recommend chatbots. As Goldenthal et al. [ 25 ] indicated, frequent reasons for not activating the chatbot included misplacing instructions for chatbot use, relying on follow-up with the clinic or discharge materials, inability to activate the chatbot, and inability to text. Moreover, chatbots are not capable of empathy, notably recognizing users' emotional states and tailoring responses to reflect those emotions. This lack of empathy may therefore compromise engagement with health chatbots [ 43 ].
Limitations of Our Review
Our review has some limitations that need to be addressed. Firstly, we focused only on AI-based chatbots, and incorporating other types of chatbots could expand their clinical application. Secondly, we conducted a qualitative assessment of the literature without collecting all identifiable articles, which could have provided a more comprehensive analysis. Thirdly, some scenarios for the use of AI bots could be further subdivided or combined. However, the purpose of our work was to provide an overview of the prospects and limitations of AI chatbots in urology. By examining the current literature and exploring various use cases, we hoped to provide an analysis of the potential benefits and drawbacks of their implementation.
Conclusion
The use of AI-driven chatbots in urology has the potential to revolutionize the discipline by enhancing patient care, raising physician productivity, and lowering healthcare costs. Chatbots can offer patients individualized assistance, information, and guidance throughout their healthcare journey, empowering them to better manage their ailments and make educated decisions. Additionally, chatbots can help medical professionals by streamlining repetitive tasks like scheduling appointments, managing medications, and tracking symptoms, giving them more time to concentrate on challenging or complex patient cases. The implementation of chatbots in urology must, however, successfully navigate several obstacles, including technical constraints, privacy and security concerns, reliability and accuracy problems, and reluctance to adopt new technologies. To accomplish this, chatbots must be created with trustworthy data sources and algorithms, rigorously tested, and frequently updated to maintain their performance and accuracy. To protect patient data, they must also adhere to pertinent privacy and security laws. Furthermore, chatbots must be designed to be user-friendly and simple to incorporate into current healthcare workflows.
To employ chatbots to improve patient care, healthcare professionals and patients must receive the proper training and assistance. To provide patients with a more tailored and engaging experience, chatbots should also be able to demonstrate empathy and comprehend their emotional states. Overall, our analysis reveals that AI-driven chatbots have the potential to revolutionize urology by enhancing patient care, raising physician productivity, and lowering medical expenses. To ensure their safe and successful application in clinical practice, they must first be carefully evaluated in light of their difficulties, limitations, and ethical considerations.
Purpose of Review
Artificial intelligence (AI) chatbots have emerged as a potential tool to transform urology by improving patient care and physician efficiency. With an emphasis on their potential advantages and drawbacks, this literature review offers a thorough assessment of the state of AI-driven chatbots in urology today.
Recent Findings
The capacity of AI-driven chatbots in urology to give patients individualized and timely medical advice is one of its key advantages. Chatbots can help patients prioritize their symptoms and give advice on the best course of treatment. By automating administrative duties and offering clinical decision support, chatbots can also help healthcare providers. Before chatbots are widely used in urology, there are a few issues that need to be resolved. The precision of chatbot diagnoses and recommendations might be impacted by technical constraints like system errors and flaws. Additionally, issues regarding the security and privacy of patient data must be resolved, and chatbots must adhere to all applicable laws. Important issues that must be addressed include accuracy and dependability because any mistakes or inaccuracies could seriously harm patients. The final obstacle is resistance from patients and healthcare professionals who are hesitant to use new technology or who value in-person encounters.
Summary
AI-driven chatbots have the potential to significantly improve urology care and efficiency. However, it is essential to thoroughly test and ensure the accuracy of chatbots, address privacy and security concerns, and design user-friendly chatbots that can integrate into existing workflows. By exploring various scenarios and examining the current literature, this review provides an analysis of the prospects and limitations of implementing chatbots in urology.
Author contributions
AT: conception, data collection, data analysis, writing main manuscript
NN: conception, data collection, data analysis, writing main manuscript
BZH: conception, data collection, data analysis, writing main manuscript
PJJ: data analysis, writing and editing, supervision
BKS: conception, data collection and analysis, writing and editing, supervision
Funding
Open access funding provided by University of Bergen (incl Haukeland University Hospital).
Data Availability
Data is available on request.
Compliance with Ethical Standards
Ethical Approval
Not required for this research type.
Conflict of Interest
Nil.
Human and Animal Rights and Informed Consent
This article does not contain any studies with human or animal subjects performed by any of the authors.
Curr Urol Rep. 2024 Sep 19; 25(1):9-18. License: CC BY.
PMC10787687 | PMID: 37833451
Introduction
Human epidermal growth factor receptor 2 (HER2) is a transmembrane receptor that plays an integral role in the control of epithelial cell growth and differentiation [ 1 ]. Amplification of this receptor has been reported in many forms of cancer and is generally associated with poor prognosis and increased disease recurrence. Trastuzumab and pertuzumab are two FDA-approved humanized anti-HER2 antibodies designed to target this receptor. These medications work synergistically by binding to and inhibiting the HER2 receptor, each at a different site, ultimately inducing antibody-dependent cell-mediated cytotoxicity and tumor death.
The addition of pertuzumab to trastuzumab (HP) in a taxane-based regimen demonstrated even greater therapeutic efficacy and is now widely used for the treatment of HER2-positive breast cancer [ 2 – 4 ]. In addition, this regimen is FDA-approved for use in combination with docetaxel for HER2-positive metastatic breast cancer, or in combination with chemotherapy as adjuvant and neoadjuvant therapy for HER2-positive locally advanced, inflammatory, or early-stage breast cancer [ 5 – 7 ]. To date, there is limited discussion of skin toxicities in the literature, despite up to 20–30% of patients reporting rash and pruritus in clinical trials [ 4 , 6 , 8 ]. A previous systematic review showed that treatment with pertuzumab-based therapy significantly increased the risk of rash development, with all-grade rash occurring in 24.6% of patients [ 9 ]. Skin and nail infections related to HP plus docetaxel combination therapy have also been previously described [ 10 ]. In our own clinical experience, many patients on these therapies present to dermatology with pruritus, often without a concurrent rash. To date, there is no discussion in the literature of this toxicity; therefore, in this single-center study, we seek to characterize the pruritus these patients experience and discuss the clinical presentation and treatment of this bothersome symptom.
This study was conducted under MSK institutional review board-approved protocol #16–458. A retrospective chart review of patients receiving both trastuzumab and pertuzumab for the treatment of HER2-positive breast cancer (SEER category 1) from 11/23/2011 to 6/21/2021 was performed at Memorial Sloan Kettering Cancer Center (MSKCC). In total, 2583 patients were on both trastuzumab and pertuzumab during this time. Patients were then narrowed down using a database query, which identified electronic medical records containing the key words "itch", "pruritus", and/or "pruritis" within breast or dermatology clinical visit notes and/or patients with itch-associated ICD 9/10 codes (n = 1338). Electronic records were then reviewed for documentation of itch or pruritus. Patients who did not have pruritus (n = 842), had pruritus due to other treatments (n = 80) or dermatologic conditions not related to therapy (n = 293), or were on an active pruritus treatment protocol (n = 1) were excluded from the analysis (Fig. 1 ).
Complete blood counts, metabolic panels, and inflammatory marker data were collected within ±14 days of pruritus onset. Only labs relevant to the development of pruritus were included in our results. The onset date of pruritus was either determined from the first date of documentation or estimated from the patient history described in the clinical record. Grading of pruritus and rash was based on the Common Terminology Criteria for Adverse Events (CTCAE v5.0 for onset after 11/27/2017; corresponding v4.0 for prior onsets). Histopathology of any skin biopsies was examined by a dermatopathologist.
Descriptive statistics were used to describe patient demographics and pruritus characteristics. A figure describing anatomic distribution of pruritus was generated using RStudio. | Results
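For illustration, most of the descriptive summaries reported in the Results reduce to simple proportions. The sketch below uses two counts taken from this study's Results; the helper function itself is ours for illustration and is not part of the study code, which used RStudio.

```python
def pct(count: int, total: int, digits: int = 1) -> float:
    """Express count/total as a percentage rounded to `digits` places."""
    return round(100 * count / total, digits)

# Cohort-level incidence: 122 pruritus cases among 2583 HP-treated patients.
incidence = pct(122, 2583, digits=2)   # reported as 4.72%
# Within-cohort frequency: 42 of 122 patients had a documented pruritus grade.
graded = pct(42, 122)                  # reported as 34.4%
```

The same calculation underlies the regimen, anatomic-site, and treatment-response percentages quoted throughout the Results.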
Demographics
A total of 122 female patients (median age at pruritus onset 54.5 years, range 28–88) with a diagnosis of HER2 + breast cancer who experienced pruritus attributed to HP from 11/23/2011 to 6/21/2021 were included in this study (Table 1 ). The reported incidence of pruritus at our institution was 4.72%. Most patients were white (68.0%), followed by Asian (12.3%) and Black/African American (10.7%) patients. Patients with breast cancer stages I–IV were included. Patients with stage II and stage IV disease comprised the largest groups (38.5% each). Stage I and stage III disease were less common at 13.9% and 9.0%, respectively. Most patients' primary regimen was doxorubicin and cyclophosphamide followed by paclitaxel, trastuzumab and pertuzumab (AC THP) (44.3%) or paclitaxel, trastuzumab and pertuzumab (THP) (41.0%). Less common regimens included docetaxel, carboplatin, trastuzumab, and pertuzumab (TCHP) (7.4%) and HP only (2.5%). Vinorelbine was used infrequently as a substitute in patients who did not tolerate initial trials with paclitaxel. Gemcitabine, trastuzumab, and pertuzumab (GHP) and doxorubicin, cyclophosphamide, methotrexate, fluorouracil, paclitaxel, trastuzumab, and pertuzumab (AC CMF THP) regimens were used in one patient (0.8%) each.
At the time of pruritus development, most patients had completed their cytotoxic chemotherapies and were on HP therapy (55.7%). The remaining patients were still receiving concurrent paclitaxel (38.5%), docetaxel/carboplatin (3.3%), vinorelbine (0.8%), gemcitabine (0.8%), and cyclophosphamide/methotrexate/fluorouracil (0.8%). Twenty-seven patients (22.1%) were also on tamoxifen or aromatase inhibitors for the treatment of their breast cancer at the time of pruritus onset.
Cutaneous toxicity
On average, patients experienced pruritus 319.0 days (range 8–3171) after initiation of HP combination therapy. The anatomic distribution of symptoms was described in 92 (75.4%) of our patients (Fig. 2 ). Among them, the most affected areas were the upper extremities (67.4%), back (29.3%), lower extremities (17.4%), shoulders (14.1%), chest (14.1%), and neck (8.7%). Less common sites of involvement included the torso (7.6%), face (6.5%), scalp (4.3%), and axillae (10.9%). Forty-two patients (34.4%) had documentation of pruritus grade. Among them, grade 2 (61.9%) was the most common, followed by grade 1 (35.7%) and grade 3 (2.4%). Eighteen patients (14.8%) experienced a concurrent rash. These were maculopapular (50%), eczematous (22.2%), and acneiform (27.8%) in morphology (Table 2 ).
Laboratory values
Complete blood counts (CBC) were obtained from 105 patients (86.1%). Hemoglobin was under the normal level in 42 (40%) of patients. Platelet counts were above the normal range in 9 patients (8.6%). One hundred and two (83.6%) had absolute eosinophil levels measured as part of their CBC. One of these patients had an elevated level of 1.2.
Metabolic panels were performed in 84 (68.9%) patients. Glomerular filtration rate was decreased below the normal range (≥ 60 mL/min/1.73 m²) in 5 (6.0%) patients and ranged from 25 to 52 mL/min/1.73 m². Alkaline phosphatase was measured in 84 (68.9%) patients; 5 patients (6.0%) had levels above the normal range. Aspartate transaminase (AST) and alanine transaminase (ALT) levels were measured in 80 patients (65.6%). AST was elevated above normal levels in 8 patients (9.5%) and ALT was elevated above normal in 2 patients (2.4%).
Of those patients with circulating cytokine levels, four patients (3.3%) had interleukin 5 (IL5) levels measured and all except one patient, who had a level of 4.5 pg/mL, were within the normal range. Fifteen patients (12.3%) had their Immunoglobulin E (IgE) levels measured. Only 3 patients (20.0%) had elevations (Table 3 ).
Histopathology
Four patients received a skin biopsy; three biopsies were available for review. One biopsy showed subtle vacuolar interface changes, one showed a sparse superficial perivascular lymphocytic infiltrate with slight edema and few mast cells (suggestive of an urticarial reaction), and one biopsy showed non-specific features of excoriation with subtle background changes including rare dyskeratotic keratinocytes and slight spongiosis. One biopsy not available for review reportedly showed features of a dermal hypersensitivity reaction. While the biopsy findings are nonspecific, the features are compatible with drug eruptions (Fig. 3 ).
Treatment
Fifty-one patients (41.8%) had some treatment for pruritus given by their oncology team (Table 4 ). Among these patients, antihistamines were the most used (45.1%), followed by topical steroids (39.2%) and emollients (35.3%). Less commonly, systemic steroids (7.8%), gabapentinoids (3.9%), topical anti-itch lotions (3.9%), topical anesthetics (3.9%), and acupuncture (2.0%) were given. The mean number of treatments given by oncology was 1.43 (SD 0.69). Sixty-seven patients (54.9%) were referred to and treated by dermatology. The most common treatments given were topical steroids (71.6%), gabapentinoids (40.3%) and antihistamines (34.3%). Less often, patients received biologics (omalizumab and dupilumab) (13.4%), immunomodulators (11.9%), topical anesthetics (11.9%), topical anti-itch lotions (7.5%), ammonium lactate (3.0%), cholestyramine (1.5%), and phototherapy (1.5%). The mean number of treatments given by dermatology was 1.91 (SD 1.25). In total, 101 patients (82.8%) received treatment from oncology and/or dermatology. Of them, thirty-nine (38.6%) were prescribed or directed to use topical medications only. Seventy-four of these patients (60.7%) had descriptions of response to treatment, with 67 (90.5%) experiencing some improvement. The remaining 7 patients (9.5%) did not experience improvement from intervention by oncology or dermatology. Of these, 2 (28.6%) discontinued cancer treatment (1 discontinued their regimen entirely, 1 discontinued pertuzumab only), 2 patients (28.6%) skipped a dose of their regimen until pruritus symptoms abated, and 3 patients (42.8%) experienced improvement in symptoms after completion of their HP regimen as scheduled. Both patients who discontinued HP were switched to a different regimen and were not rechallenged later with HP.
The most effective treatments given by oncology and/or dermatology were also determined from clinical documentation. Thirty-seven patients (50%) experienced improvement with topicals only. Improvement in symptoms was attributed to topical steroids (52.2%), antihistamines (29.9%), emollients (20.9%), gabapentinoids (16.4%), biologics (4.5%), systemic steroids (3.0%), acupuncture (1.5%), topical anti-itch lotions (1.5%), ammonium lactate (1.6%), topical anesthetics (1.5%), and phototherapy (1.5%).
Discussion
In this single-center study, we found a reported pruritus incidence of 4.72% among patients treated with HP. This is lower than what has been reported in clinical trials, which have ranged from 11 to 17.6% [ 8 , 11 , 12 ]. The discrepancy may be due to our study design. We were limited to data that could be gathered from the electronic medical record, and therefore our results were likely subject to patient underreporting or provider under-documentation. Despite these deficits, it is possible that our results more accurately reflect the true burden of HP-associated pruritus. In clinical trials, patients are systematically assessed with questionnaires and may report mild or unrelated symptoms that would otherwise not be reported in the standard clinical setting. As such, we expect that, although patients with very mild or self-resolving cases may not have been included in our cohort, we adequately characterized patients with symptoms requiring intervention, which is likely more relevant for real-world management.
Pruritus among these patients tends to be low grade; only one patient in our cohort experienced a high-grade toxicity. This is consistent with what has been reported in the literature [ 8 , 11 , 12 ]. The most common severity we observed was grade 2, which indicates that despite being low grade, these symptoms may nevertheless lead to limitations in instrumental activities of daily living (ADLs) [ 13 ]. Pruritus, even without any associated skin eruption, has been shown to significantly reduce quality of life [ 14 – 16 ]. Fortunately, symptoms among our patients appeared to be manageable with interventions by oncologists and dermatologists, with only 4 (3.3%) requiring treatment interruption or discontinuation. In addition, a large subset of our cohort was managed effectively with topical interventions only. Topical steroids, oral antihistamines, emollients and gabapentinoids appeared to have the most success, which is consistent with management recommendations for pruritus secondary to other targeted therapies [ 16 ]. Among our cohort, first-generation antihistamines were most often prescribed. Nighttime dosing of these medications was often recommended and may be beneficial given the risk of associated drowsiness.
Unlike cutaneous toxicities associated with other targeted therapies, which typically emerge within the first few cycles or months of therapy, pruritus associated with HP therapy tended to present late [ 17 , 18 ]. While the mechanism of this is not entirely clear, we speculate that the administration of chemotherapies prior to HP alone may play a role. One hundred and thirteen patients (97.4%) received some cytotoxic chemotherapy prior to and/or concurrent with HP therapy. The use of these chemotherapies can induce a state of immunosuppression which may impede or delay the development of pruritus or rash. Paclitaxel use was especially common in our cohort, with 93.1% of patients using it prior to or concurrent with HP alone. The relationship between paclitaxel and the immune system is a matter of current study, but previous investigators have found that this therapy can inhibit T cell (and therefore autoreactive T cell), B cell or inflammatory cell activation and proliferation, thereby exerting immunomodulatory effects [ 19 , 20 ].
The mechanism of pruritus development in these patients has not been determined. Absolute eosinophils, IL5 and IgE were within normal range for most of our patients, but it is difficult to draw conclusions from this as circulating levels may not correlate with those in the skin. In considering other etiologies of itch in our cohort, we evaluated for causes of generalized pruritus including uremia, cholestasis, thrombocythemia and iron deficiency anemia. Only a small subset of patients had evidence of impaired renal or liver function, and thrombocytosis was similarly uncommon. Hemoglobin levels were below normal in 39 patients, suggesting the potential role of iron deficiency anemia in the development of itch symptoms among our cohort. Iron tests were not obtained from our patients, but iron deficiency anemia occurs frequently among patients with solid tumors [ 21 , 22 ]. Histopathologic analysis from our cohort was nonspecific, and, as standard of care, tended to favor patients with concomitant rash. As such, these findings may not be representative of the larger cohort, most of whom did not have any associated eruption.
A notable finding of our study was the anatomic distribution of pruritus, which predominantly affected the upper extremities. In our clinical experience, patients often present with a distribution of itch akin to that seen in brachioradial pruritus, which involves the C5 and C6 dermatomes. Brachioradial pruritus is believed to be due to a combination of cervical nerve irritation and ultraviolet radiation, though its mechanism has yet to be fully elucidated [ 23 ]. There is one case in the literature describing a patient with breast cancer (on treatment with HP) who developed brachioradial pruritus. In this particular patient, the development of symptoms was attributed to metastatic disease to her cervical spine [ 24 ]. In our cohort, the presence of cervical disease (either metastases or other degenerative pathology) was only documented for 2 patients (1 metastases and 1 degenerative). However, the true number of patients with cervical pathologies or degenerative changes is likely higher than this, especially considering the advanced age of our cohort. Patients in our study were only imaged at our center for evaluation of cancer progression, so degenerative changes may have been underreported or not detected.
In addition to cervical nerve irritation, we can also speculate that there may be some component of photosensitivity in our patients, but there are no existing reports in the literature of photosensitivity due to anti-HER2 therapies specifically. There are, however, reports of photosensitive eruptions secondary to EGFR inhibitors, and both pertuzumab and, to a lesser degree, trastuzumab have been shown to interact with EGFR as heterodimers [ 25 – 27 ]. The disruption of this process attenuates EGFR signaling, and while functional HER2 heterodimers have not been found in human skin, the clinical similarities between anti-HER2 and anti-EGFR eruptions suggest there may nonetheless be some functional interaction [ 9 ]. However, patients in our cohort did not experience pruritus preferentially in the summer months, as might be expected with a photosensitive process. In addition, skin and nail infections secondary to HP therapy appeared histologically similar to EGFR-associated cutaneous toxicities, suggesting some shared pathologic process [ 10 ]. Further investigation into the origin of this pruritus distribution is needed.
There were some limitations to our study. Many patients did not have a full description of symptoms in their EMR, and therefore we may not have fully captured their histories or treatments. In addition, the descriptive nature of this study limited the analyses that could be performed. We did not have a comparator group, and therefore were unable to comment on patient characteristics or disease features that were associated with the development of pruritus. We were also unable to evaluate the relationship between pruritus development and tumor response to HP. The development of cutaneous toxicities from targeted therapies as a predictor for prognosis and response to treatment has long been a point of interest. This relationship is most well described among patients on immune checkpoint inhibitors, but a similar phenomenon was previously demonstrated among patients receiving anti-HER2 therapies. Using data from the CLEOPATRA study, investigators found that occurrence of pertuzumab rash was associated with improved prognosis for both progression free survival and overall survival [ 23 ]. The relationship between pruritus specifically and prognosis was not explored in this study and poses a potential subject for future investigations.
Purpose
The combination of trastuzumab and pertuzumab (HP) as part of a taxane-based regimen has shown benefit in the adjuvant and metastatic HER2 + breast cancer setting. In the CLEOPATRA trial, pruritus was reported in 11-17.6% of patients. The clinical phenotype and potential treatment strategies for this event have not been reported.
Methods
A retrospective review of 2583 patients receiving trastuzumab and pertuzumab for the treatment of HER2 + breast cancer from 11/23/2011 to 6/21/2021 was performed at Memorial Sloan Kettering Cancer Center (MSKCC). Patient demographics, pruritus characteristics, and treatments as documented in the electronic medical record (EMR) were included in this analysis.
Results
Of 2583 patients treated with HP, 122 (4.72%) with pruritus were identified. On average, patients experienced pruritus 319.0 days (range 8–3171) after initiation of HP. The upper extremities (67.4%), back (29.3%), lower extremities (17.4%), and shoulders (14.1%) were the most commonly affected regions. Grade 1/2 pruritus (97.6%) occurred in most cases. Patients responded primarily to treatment with topical steroids (52.2%), antihistamines (29.9%), emollients (20.9%), and gabapentinoids (16.4%). Of those with pruritus, 4 patients (3.3%) required treatment interruption or discontinuation.
Conclusions
Pruritus is an uncommon but generally chronic condition in patients on trastuzumab and pertuzumab, with gabapentinoids and antihistamines representing effective therapies.
Keywords
Acknowledgements
We would like to thank Joseph Schmeltz for performing the database inquiry for this study.
Author contributions
Sarah Noor and Mario E Lacouture contributed to the study conception and design. Data collection was performed by Stephanie Gu. Data analysis and table/figure creation was performed by Stephen Dusza and Stephanie Gu. The manuscript was primarily written by Stephanie Gu. Andrea Moy examined histopathologic slides and reported the findings in the manuscript. Elizabeth Quigley, Helen Haliasos, Alina Markova, Michael Marchetti, Andrea P. Moy, Chau Dang, Shanu Modi, and Diana Lake all reviewed and contributed to the manuscript. All authors have read and approved the final manuscript.
Funding
This research was funded in part through the NIH/NCI Cancer Center Support Grant P30 CA008748.
Data Availability
The datasets are not publicly available but can be provided to researchers upon request to the authors.
Declarations
Competing interests
Alina Markova receives research funding from Incyte Corporation and Amryt Pharma; consults for ADC Therapeutics, Alira Health, Protagonist Therapeutics, OnQuality, and Janssen; and receives royalties from UpToDate. Mario E Lacouture has a consultant role with Johnson and Johnson, Novocure, Janssen, Novartis, Deciphera, Kintara, RBC/La Roche Posay, Trifecta, Genentech, Loxo, Seattle Genetics, Lutris, OnQuality, Roche, Oncoderm, Apricity. Mario E Lacouture also receives research funding from Lutris, Paxman, Novocure, OQL, Novartis and AZ, and is funded in part through the NIH/NCI Cancer Center Support Grant P30 CA008748. Shanu Modi receives research funding from Genentech, Daiichi Sankyo, AstraZeneca, and Seattle Genetics. Shanu Modi also receives honoraria/consulting/advisory fees from Genentech, Daiichi Sankyo, AstraZeneca, Seattle Genetics, Gilead, Macrogenics, Novartis, GlaxoSmithKline, and Zymeworks. Chau Dang receives research support from Roche/Genentech and Puma, consulting support from Novartis, Pfizer and Seagen, and honorariums from Gilead and Daiichi. Elizabeth Quigley receives royalties from UptoDate.
Ethics approval and consent to participate
This is an observational study. The Memorial Sloan Kettering Cancer Center Research Ethics Committee has confirmed that no ethical approval is required. Informed consent was obtained from all participants in the study. The authors affirm that human research participants provided informed consent for publication of the images in Fig. 3.
Breast Cancer Res Treat. 2024 Oct 13; 203(2):271-280
Language is a principal characteristic of humanity (Staats 2012 ). As a minimally meaningful unit of language, a word comprises an association between perceptually linguistic information (i.e., written and spoken word forms, Braille, fingerspelling, and so on) and perceptually and emotionally referential information (Kambara et al. 2020 ; Paivio 1986 ). The association between linguistic and referential information can be arbitrary (e.g., de Saussure 1959 ) or nonarbitrary (e.g., Blasi et al. 2016 ). An evidence-based theory suggests that linguistic and referential features are processed in two distinct systems: a verbal and a nonverbal system (e.g., Paivio 1986 ). Many previous studies have reported that the association between linguistic and referential information can be learned through our perceptual and emotional experiences in first and second languages (Breitenstein et al. 2005 ; Carpenter and Olson 2012 ; Cornelissen et al. 2004 ; Grönholm et al. 2005 ; Havas et al. 2018 ; Hawkins et al. 2015 ; Hawkins and Rastle 2016 ; Horinouchi et al. under revision; Hultén et al. 2009 ; Jeong et al. 2010 ; Kambara et al. 2013 ; Lee et al. 2003 ; Li et al. 2020 ; Liu et al. 2021 ; Takashima et al. 2017 ; Tsukiura et al. 2010 , 2011 ; Yan et al. 2021 ; Yang et al. 2021 ). Other studies have found that participants could condition subjective evaluations (unconditioned responses: UCR) of real words and perceptual stimuli (unconditioned stimuli: UCS or US) to those (conditioned responses: CR) of other real words, pseudowords, symbols, and other perceptual stimuli (conditioned stimuli: CS; e.g., Barnes-Holmes et al. 2000 ; Cicero and Tryon 1989 ; Hughes et al. 2018 ; Paivio 1964 ; Staats and Staats 1958a ; Staats et al. 1961 ; Staats and Staats 1957 ; Till and Priluck 2000 , 2001 ; Tryon and Cicero 1989 ; Valdivia-Salas et al. 2013 ; for review, De Houwer et al. 2001 ; Jaanus et al. 1990 ).
The phenomenon in which people condition evaluations of stimuli to those of other stimuli has been termed evaluative conditioning (Hofmann et al. 2010 ). Evaluative conditioning can be conducted in three emotionality dimensions: evaluation (valence: ratings from pleasant to unpleasant ), activity (arousal: ratings from active to passive ), and potency (dominance: ratings from strong to weak ). These dimensions are associated with the semantic differential method (Osgood et al. 1957 ) and the Self-Assessment Manikin (Bradley and Lang 1994 ). Verbal evaluative conditioning (learning) can affect the evaluative meanings (evaluative responses) of words (Jaanus et al. 1990 ). As pioneers of verbal evaluative conditioning, Staats and Staats ( 1957 ) showed that native English speakers conditioned subjective evaluations (meanings) of spoken English words to those of English pseudowords in the three dimensions (evaluation, potency, and activity). Basically, pseudowords as linguistic information would not be associated with any sensorimotor or emotional features as nonverbal information in the mental lexicon (e.g., Fritsch and Kuchinke 2013 ), although pseudowords may carry sound symbolic effects (e.g., Davis et al. 2019 ; Lupyan and Casasanto 2015 ; Spence and Gallace 2011 ). Additionally, previous research shows that participants conditioned subjective evaluations of real English words to those of other real English words (e.g., Staats and Staats 1958a ; Staats and Staats 1958b ). These results suggest that evaluative conditioning can reorganize the associations between linguistic and referential information in the mental lexicon. This reorganization indicates that the affective meanings of words can be reconfigured by evaluative conditioning.
Evaluative conditioning affects the lexical access of words (Kuchinke and Mueller 2019 ) because lexical access is faster for affective than for neutral words (Kissler and Herbert 2013 ). The effects of evaluative conditioning were higher for supraliminal (conscious) than for subliminal (unconscious) presentation of the UCS (US) and for self-report than for implicit methods (Hofmann et al. 2010 ). A study of a non-alphabetic language showed that people can unconsciously (subliminally) condition words to other words (Galli and Gorn 2011 ), and many studies of alphabetic languages show the same effect (e.g., De Houwer et al. 1994 , 1997 ; Dijksterhuis 2004 ). A recent study of Brazilian samples also reported that although subliminal verbal evaluative conditioning did not affect CS evaluation, people exposed to pairings of eating-related words (CS) and positive words showed increased saliva compared with those conditioned to pairings of CS and negative words (Passalli et al. 2022). Most of the previous findings suggest that native speakers of alphabetic languages can associate subjective evaluations (meanings) of UCS, including alphabetically written real words, with those of CS, including alphabetic pseudowords. However, to the best of our knowledge, no study has examined whether native speakers of non-alphabetic languages can consciously condition words to other words, including pseudowords, as shown in Staats and colleagues' studies in English (Staats and Staats 1957 ; Staats et al. 1959 ). A recent study has shown that although verbal, conscious (supraliminal) evaluative conditioning occurred in an alphabetic native language (Spanish) and non-alphabetic second languages (Chinese and Japanese), the effect of evaluative conditioning was higher for the native language than for the second languages (Vidal et al. 2021 ).
Because native speakers of alphabetic languages can consciously condition evaluative responses of words to other words in both an alphabetic native language and non-alphabetic second languages, native speakers of a non-alphabetic language should also be able to condition evaluative responses (meanings) of words used as UCS (e.g., positive/negative, active/passive, or strong/weak meanings of words; Staats and Staats 1957 ) to those of pseudowords used as CS through conscious evaluative conditioning. One important factor in the evaluative conditioning of language is whether participants have evaluative responses to the words used as UCS. Additionally, the features of those evaluative responses (i.e., evaluation, potency, and activity) would be essential for the evaluative conditioning of language. However, differences among first (native) languages would not influence the evaluative conditioning of language for native speakers, whereas differences between first and second languages would (Vidal et al. 2021 ). This study aims to clarify whether native speakers of a non-alphabetic language (Japanese) condition the subjective evaluations of spoken Japanese words to those of written Japanese pseudowords. Japanese is one of many non-alphabetic languages. Before this study, we predicted that native Japanese speakers would condition the meanings of spoken Japanese words used as UCS to those of written Japanese pseudowords used as CS, based on previous research on the evaluative conditioning of language (language conditioning; e.g., Staats 2012 ). The Japanese writing system includes katakana (カタカナ), hiragana (ひらがな), and kanji characters (漢字), with roman letters (ローマ字), numbers, and symbols also being used (Yamaguchi 2007 ). In Experiments 1 and 2, we used katakana, a set of original, non-alphabetic Japanese characters, for the written CS.
We examined whether subjective evaluations of written Japanese pseudowords were conditioned to those of spoken Japanese words. First, we conducted a survey study (Survey 1) in which participants evaluated whether each presented word was positive or negative, active or passive, and strong or weak. The Japanese words were translated from the English words in Staats and Staats's study ( 1957 ). In Survey 1, we used five-point scales for preliminary checking of the word evaluations. Second, after selecting the word stimuli, we conducted two behavioral experiments (Experiments 1 and 2). In Experiment 1, we examined whether subjective evaluations of positive and negative words were conditioned to those of pseudowords. In Experiment 2, we examined whether subjective evaluations of active and passive words were conditioned to those of pseudowords. Lastly, a post-hoc survey (Survey 2) was conducted to specify evaluations (both the positive/negative and active/passive ratings) of all the words used in Experiments 1 and 2, because Survey 1 only specified positive/negative ratings of the positive and negative words for Experiment 1 and active/passive ratings of the active and passive words for Experiment 2. Survey 2 clarified all the ratings of the words used in Experiments 1 and 2. The surveys (Surveys 1 and 2) and behavioral experiments (Experiments 1 and 2) were conducted according to the Declaration of Helsinki and approved by the ethical committee of the Graduate School of Humanities and Social Sciences at Hiroshima University. Additionally, we obtained informed consent from participants, who pressed a key to approve participation after reading an explanation of the surveys and behavioral experiments.
Survey 1
We conducted Survey 1 to select Japanese real words translated from the English real words used by Staats and Staats ( 1957 ) and Staats et al. ( 1959 ). Based on Cronbach's alphas in each category of real words, we selected Japanese real words associated with negative and positive features for Experiment 1 and those associated with passive and active features for Experiment 2. However, we decided not to use Japanese real words associated with weak and strong features in a behavioral experiment, since their Cronbach's alphas did not show sufficient reliability (higher than 0.70; Cortina 1993 ).
Participants
Eighty-four undergraduate students, who attended a psychological lecture, participated in this survey study (26 females; M age = 19.21; SD age = 1.37). All the participants were healthy native Japanese speakers.
Stimuli
We translated 97 English real words used by Staats and Staats ( 1957 ) and Staats et al. ( 1959 ) into Japanese using a dictionary (Minamide 2014 ). The previous studies (Staats and Staats 1957 ; Staats et al. 1959 ) collected most of the relevant words from Osgood and Suci ( 1955 ), who showed that bipolar word pairs (e.g., old and young as bipolar meanings of age) were classified into three factors (evaluation, activity, and potency). Of the Japanese real words, 34 were associated with negative or positive features, 28 with passive or active features, and 35 with weak or strong features (Osgood and Suci 1955 ; Osgood et al. 1957 ; Staats and Staats 1957 ; Staats et al. 1959 ). The part of speech (word class) of some words used in the previous and current studies differs. Appendix A shows the full word list, including written Japanese words, the English meanings, written romaji (alphabetically written words for the pronunciations), the number of moras, and frequencies of the words shown in a Japanese psycholinguistic database (Amano and Kondo 2000 ; see Appendix A).
Procedures
Participants judged what the presented words refer to and responded to a Google Form on their personal computers. They performed three types of judgments. First, as positiveness ratings, they evaluated whether the presented words had negative or positive features using a 5-point semantic differential scale from 1 ( negative ) to 5 ( positive ). Second, as activeness ratings, they judged whether the presented words had passive or active features using a 5-point semantic differential scale from 1 ( passive ) to 5 ( active ). Third, as strongness ratings, they evaluated whether the presented words had weak or strong features using a 5-point semantic differential scale from 1 ( weak ) to 5 ( strong ).
Analyses
To examine the reliability of words associated with negative, positive, passive, active, weak, or strong features, we calculated Cronbach’s alpha using IBM SPSS 26. | Results
Cronbach's alphas of negative ( α = 0.91), positive ( α = 0.79), passive ( α = 0.90), and active words ( α = 0.80) were greater than 0.70 (Cortina 1993 ); however, the weak and strong words did not reach sufficient reliability (higher than 0.70; Cortina 1993 ). Therefore, we decided to use 28 Japanese real words associated with negative and positive features for Experiment 1 (14 each) and 26 Japanese real words associated with passive and active features for Experiment 2 (13 each).
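The reliability check above can be sketched in code. The following is an illustrative Python implementation of Cronbach's alpha (the authors used IBM SPSS 26); the rating matrix is made up for demonstration:

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for a participants x items matrix of ratings.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores),
    where k is the number of items (words in one category).
    """
    k = len(ratings[0])  # number of items (words)

    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in ratings]) for i in range(k)]
    total_var = var([sum(row) for row in ratings])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point ratings from three participants on two words of one
# category; perfectly consistent items give alpha = 1.0.
ratings = [[1, 1], [2, 2], [3, 3]]
alpha = cronbach_alpha(ratings)
```

A category would be retained for the experiments only when the resulting alpha exceeds the 0.70 threshold (Cortina 1993).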
Experiment 1 investigated whether written Japanese pseudowords are conditioned to spoken Japanese positive or negative words. Two findings emerged. First, there was no significant difference in evaluative responses across conditions (evaluative responses of pseudowords conditioned to positive, negative, and neutral words) before conditioning; however, after conditioning, evaluative responses of pseudowords conditioned to positive and neutral words were higher than those conditioned to negative words. Second, after conditioning, pseudowords conditioned to positive words were rated higher (more positive) than before, while those conditioned to negative words were rated lower (more negative) than before. These findings are congruent with previous findings (e.g., Staats and Staats 1957 ). These results suggest that native Japanese speakers condition positive and negative evaluations of spoken Japanese words to those of written Japanese pseudowords. However, evaluative responses of pseudowords conditioned to neutral words did not differ significantly before and after conditioning. This result indicates that the neutral words functioned as neutral stimuli in Experiment 1. The experimental findings of Experiment 1 suggest that the evaluative responses of pseudowords conditioned to neutral words were not associated with the mere-exposure or familiarity effect, by which participants tend to prefer familiar things (Zajonc 1968 , 2001 ; Monahan et al. 2000 ). The neutral words selected from a previous study (Higami et al. 2015 ) would be familiar to the participants. For example, words that people might use often (e.g., traffic light, home address, and road) are familiar words. Psycholinguistic studies have shown that the familiarity of verbal information positively correlates with the emotional valence (preference) of verbal information (e.g., Ando et al. 2021 ; Citron et al. 2014 ).
Taken together, these findings suggest that Japanese pseudowords can be conditioned to positive and negative words.

This study aimed to examine whether Japanese participants condition spoken words' meanings to written pseudowords. In Survey 1, we selected spoken words associated with negative ( α = .91) and positive ( α = .79) features for Experiment 1 and passive ( α = .90) and active ( α = .80) features for Experiment 2. In Experiment 1, participants evaluated four written pseudowords' emotional valence using a 7-point semantic differential scale (1: negative ; 7: positive ) before and after conditioning spoken words with negative, neutral, or positive features to each pseudoword. In the conditioning phase, participants read each pseudoword, listened to a spoken word, and verbally repeated each spoken word. The results showed that a pseudoword was conditioned to spoken words with positive and negative features. In Experiment 2, participants evaluated four pseudowords' activeness using a 7-point semantic differential scale (1: passive ; 7: active ) before and after conditioning spoken words with passive, neutral, or active features to each written pseudoword. In the conditioning phase, the participants read each written pseudoword, listened to a spoken word, and repeated the spoken word. The results showed that activeness evaluations increased after conditioning for pseudowords conditioned to spoken words with active and neutral features but were unchanged for a pseudoword conditioned to those with passive features. Additionally, Survey 2's results showed that although the positiveness and activeness responses of the words used in Experiments 1 and 2 were controlled well, the lack of significant differences among positiveness responses of words may have influenced the evaluative conditioning in Experiment 2.
That is, when participants condition the activeness (arousal) ratings of passive (low-arousal) words to those of pseudowords, the words' positiveness (valence) ratings would be important in the evaluative conditioning. Our findings suggest that participants can condition spoken word meanings of preference and activeness to those of written pseudowords. They also indicate that the effects of linguistic evaluative conditioning are robust in a non-alphabetic language.
Keywords
Experiment 1
Participants
There were 37 participants (21 females; M age = 36.24; SD age = 7.35); all of them were native Japanese speakers. They were recruited from a crowdsourcing company in Japan (CrowdWorks, Inc.). The participants received a monetary reward of 220 Japanese yen, including the incidentals.
Experimental paradigm
Experiment 1 employed a 2 × 3 within-subject factorial design. The dependent variable was the evaluative response to Japanese pseudowords (1: negative ; 7: positive ), and the independent variables were time (1: before conditioning; 2: after conditioning) and condition (1: conditioning to positive words; 2: conditioning to negative words; 3: conditioning to neutral words).
Experimental devices
An IC recorder (ICD-SX1000) was used to prepare spoken Japanese real words. Additionally, we used an online platform for behavioral experiments (gorilla.sc) to record the participants’ responses.
Stimuli
We used four written pseudowords selected from psycholinguistic research on pseudowords (Umemoto et al. 1955 ), namely, wayu ( ワユ ), sohi ( ソヒ ), nuyo ( ヌヨ ), and rehe ( レヘ ). The four Japanese written pseudowords were presented with katakana characters which are Japanese characters that encompass a consonant and vowel or a vowel only (Yamaguchi 2007 ). In this study, we only used katakana characters that consisted of a consonant and a vowel; pseudowords consist of two katakana characters (Umemoto et al. 1955 ).
Additionally, we used 28 spoken Japanese positive (e.g., love) and negative words (e.g., fear) based on the results of Survey 1 (Appendix B). The results of Survey 1 showed that the mean positiveness scores of the selected positive words ranged from 4.30 to 4.85, whereas the mean positiveness scores of the selected negative words ranged from 1.21 to 2.07 (see Appendix B). We also used 28 spoken Japanese neutral words (e.g., lectures) based on a previous study in which the authors identified Japanese real words associated with neutral emotional features (Higami et al. 2015 ). Another previous study (Staats and Staats 1957 ) used content words (e.g., nouns and adjectives) as positive and negative words and function words (e.g., prepositions such as "with") as neutral words. However, since the differences in lexical categories (content words vs. function words) might have affected the previous study's results, we decided to use only content words for all the spoken Japanese real words. A female native Japanese speaker (the first author) uttered the spoken Japanese real words (positive, negative, and neutral words) for recording. We recorded the spoken Japanese real words in a quiet room to control for environmental sounds and noise.
Procedures
The experiment consisted of a first evaluation phase (an evaluation task before conditioning), a conditioning phase, and a second evaluation phase (an evaluation task after conditioning). The three phases combined took approximately 15 min. This experimental flow was largely based on a previous study (Staats et al. 1959 ) that decreased the number of trials used in Staats and Staats ( 1957 ). In their research, Staats and colleagues asked participants to rate pseudowords after the conditioning phase using one of the scales linked to evaluation (valence: ratings from pleasant to unpleasant ), activity (arousal: ratings from active to passive ), or potency (dominance: ratings from strong to weak ) in their three experiments. However, the previous studies (Staats and Staats 1957 ; Staats et al. 1959 ) did not include the first evaluation phase used in this study. Before the experiment, we stated: "Please set your keyboard input to half-width characters and numbers, because you need to use the half-width numbers for your answers. Please set up your computer to allow you to hear the audio. We recommend that the volume of the sound is at least half of what you can hear clearly. To repeat spoken sounds verbally, please conduct the experiment in an environment where you can speak up. After you complete the settings, please press the space key to proceed to the experiment." In both evaluation phases (before and after conditioning), participants rated the emotional features of four written Japanese pseudowords, namely, wayu ( ワユ ), sohi ( ソヒ ), nuyo ( ヌヨ ), and rehe ( レヘ ), using a 7-point semantic differential scale from 1 ( negative ) to 7 ( positive ; Osgood et al. 1957 ; Staats and Staats 1957 ; Staats et al. 1959 ). In each evaluation phase, participants pressed a key from one to seven on their keyboards. In the evaluation phases, each pseudoword remained on the screen until the participant responded.
Furthermore, each stimulus was followed by a cross mark (+) shown for 400 ms, with 100 ms pauses before and after the cross mark. Before the first and second evaluation phases, we stated: "Please rate how negative or positive you feel about the katakana words displayed on the screen by pressing the keys 1 ( negative ) to 7 ( positive ) on the keyboard." In each trial of the conditioning phase, we presented a written Japanese pseudoword for 5000 ms, followed by a fixation (a cross mark: +) for 1200 ms, including 100 ms pauses before and after the fixation, and a spoken Japanese word selected from Survey 1. After presenting each spoken Japanese word, we asked the participants to repeat it verbally once. After producing the spoken word once, they pressed the key at their own pace. Before the conditioning phase, we stated: "Please watch the katakana words that will appear on the screen carefully, and carefully listen to and repeat the Japanese words that will be auditorily presented. Please press the space key to advance to the next task." These methods were approximately similar to those of previous studies (Staats and Staats 1957 ; Staats et al. 1959 ). In all the phases (first evaluation, conditioning, and second evaluation), the stimulus color was black, while the background color was white.
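The timing of one conditioning trial described above can be laid out as a simple event list. This is an illustrative Python sketch (event names are our own); we read the 1200 ms fixation interval as including the 100 ms pauses before and after the cross mark, so the cross itself shows for 1000 ms:

```python
# One conditioning trial: pseudoword for 5000 ms, then a fixation interval
# of 1200 ms total (100 ms pause + 1000 ms cross + 100 ms pause), then the
# spoken word, which the participant repeats before advancing by keypress.
# Durations are in milliseconds; None marks self-paced events.
CONDITIONING_TRIAL = [
    ("pseudoword", 5000),    # written Japanese pseudoword (e.g., "ワユ")
    ("pause", 100),
    ("fixation_cross", 1000),
    ("pause", 100),
    ("spoken_word", None),   # audio plays; participant repeats it aloud
    ("keypress", None),      # participant advances at their own pace
]

def timed_duration(trial):
    """Total duration of the fixed-timing events (None = self-paced)."""
    return sum(d for _, d in trial if d is not None)
```

The fixed-timing portion of each trial thus lasts 6200 ms, with the remainder depending on the participant's repetition and keypress.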
We counterbalanced the word lists of Japanese pseudowords and spoken Japanese real words between participants to control for stimulus effects (De Houwer et al. 2001 ), such as sound symbolism, in which linguistic features (e.g., spoken sounds) nonarbitrarily connect to referential perceptual and emotional features (e.g., Ando et al. 2021 ; Kambara and Umemura 2021 ; Lin et al. 2021 ; Namba and Kambara 2020 ). To counterbalance the word stimuli, we separated the sample into two groups. In the first group, wayu ( ワユ ) was conditioned to negative words and nuyo ( ヌヨ ) to positive words. Conversely, in the second group, wayu ( ワユ ) was conditioned to positive words and nuyo ( ヌヨ ) to negative words. In both groups, sohi ( ソヒ ) and rehe ( レヘ ) were conditioned to neutral words. Additionally, we randomized the presentation order of stimuli on gorilla.sc between participants to control for effects of the presentation order of written pseudowords and spoken words (Francis and Ciocca 2003 ); a previous study emphasized the importance of controlling the presentation order of stimuli as a future direction (Ando et al. 2021 ).
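The counterbalancing scheme just described can be sketched as follows. This is an illustrative Python fragment, not the authors' implementation; the group labels and the even/odd assignment rule are our own assumptions:

```python
# Counterbalanced stimulus assignment across the two participant groups.
# wayu/nuyo swap valence between groups; sohi/rehe are always neutral.
ASSIGNMENTS = {
    "group1": {"ワユ": "negative", "ヌヨ": "positive",
               "ソヒ": "neutral", "レヘ": "neutral"},
    "group2": {"ワユ": "positive", "ヌヨ": "negative",
               "ソヒ": "neutral", "レヘ": "neutral"},
}

def condition_for(participant_id, pseudoword):
    """Alternate participants between groups to balance the word lists
    (hypothetical rule: even IDs -> group1, odd IDs -> group2)."""
    group = "group1" if participant_id % 2 == 0 else "group2"
    return ASSIGNMENTS[group][pseudoword]
```

Swapping which pseudoword receives the positive versus negative word list means that any preference driven by the pseudowords' own forms (e.g., sound symbolism) is balanced out across the sample.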
Analyses
We conducted a cumulative link mixed model (Christensen 2019 ) with two fixed effects of phase (before conditioning (phase 1); after conditioning (phase 2)) and condition (a pseudoword conditioned to positive words; a pseudoword conditioned to neutral words; a pseudoword conditioned to negative words), two random effects of participants and words (Baayen et al. 2008 ), and one dependent variable (evaluative responses to pseudowords) using the "ordinal" (Christensen 2019 ) and "emmeans" packages (Lenth 2022 ) in R (R Core Team 2022 ). We also used "Rmisc" to calculate means and 95% confidence intervals (Hope 2022 ), "FSA" to calculate medians and first and third quartiles (Q1 and Q3; Ogle et al. 2023 ), and "ggplot2" to create a figure of the results (Wickham 2016 ) in R (Mangiafico 2015 ; Szuba et al. 2022 ). For model selection, we first employed the maximal random-effects structure (Barr et al. 2013) and then simplified it. Likelihood ratio tests were performed for the cumulative link mixed models using the anova() function in R (e.g., Baayen 2008 ; Winter 2020 ). Lastly, we applied random (varying) intercepts by participants and words and random (varying) slopes for condition by participants as random effects (e.g., Baayen et al. 2008 ; Grasso et al. 2022 ; Winter 2020 ).
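The likelihood-ratio tests used for model selection compare a richer random-effects structure against a simpler nested one. A minimal Python sketch of that comparison (the authors did this with R's anova() on the fitted models; the log-likelihood values below are made up), using the chi-square tail for nested models differing by one parameter:

```python
import math

def lrt_pvalue_df1(loglik_reduced, loglik_full):
    """Likelihood-ratio test of nested models differing by one parameter.

    The LR statistic 2 * (ll_full - ll_reduced) is compared against a
    chi-square distribution with 1 df, whose survival function is
    P(X >= x) = erfc(sqrt(x / 2)).
    """
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, math.erfc(math.sqrt(stat / 2.0))

# Hypothetical log-likelihoods: the richer model fits better by 3 units,
# so the richer random-effects structure would be retained (p < 0.05).
stat, p = lrt_pvalue_df1(-420.0, -417.0)
```

A significant result favors keeping the extra random-effects term; otherwise the simpler structure is preferred.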
Results
In the cumulative link mixed model, we used the raw evaluative responses of pseudowords ( nuyo , ヌヨ ; wayu , ワユ ) conditioned to positive or negative words and the raw evaluative responses of the two pseudowords conditioned to neutral words ( sohi , ソヒ ; rehe , レヘ ). The results showed a significant effect of phase and a significant interaction between phase and condition (see Fig. 1 and Tables 1 and 2 ). The post-hoc tests showed that the evaluative responses of pseudowords conditioned to positive words were higher (more positive) after conditioning than before (Table 3 ). The evaluative responses of pseudowords conditioned to negative words were lower (more negative) after conditioning than before (Table 3 ). Additionally, the post-hoc tests showed that, after conditioning, evaluative responses of a pseudoword conditioned to positive and neutral words were higher (more positive) than those conditioned to negative words (Table 3 ). Table 1 shows medians and first and third quartiles (Q1 and Q3), whereas Fig. 1 shows the means and 95% confidence intervals.
Experiment 2
Participants
There were 38 participants (17 females; M age = 42.18; SD age = 9.19); all of them were native Japanese speakers. They were recruited from a crowdsourcing company in Japan (CrowdWorks, Inc.). The participants received a monetary reward of 220 Japanese yen, including the incidentals.
Experimental paradigm
Experiment 2 employed a 2 (before conditioning and after conditioning) × 3 (active, passive, and neutral) within-subjects factorial design. The dependent variable was the evaluative response to Japanese pseudowords (1: passive ; 7: active ), whereas the independent variables were time (1: before conditioning; 2: after conditioning) and condition (1: conditioning to active words; 2: conditioning to passive words; 3: conditioning to neutral words).
Experimental devices
An IC recorder (ICD-SX1000) was used to prepare spoken Japanese real words. Additionally, we used an online platform for behavioral experiments (gorilla.sc) to record the participants’ responses.
Stimuli
We used four written pseudowords ( wayu , ワユ ; sohi , ソヒ ; nuyo , ヌヨ ; rehe , レヘ ) that were the same as those used in Experiment 1.
Additionally, we used 26 spoken Japanese active (e.g., fast) and passive words (e.g., lazy) based on the results of Survey 1 (see Appendix C). The results of Survey 1 showed that the mean activeness scores of the selected active words ranged from 3.20 to 4.79, whereas the mean activeness scores of the selected passive words ranged from 1.29 to 2.85 (see Appendix C). We also used 26 spoken Japanese real words associated with neutral features (e.g., lectures; Higami et al. 2015 ), which were the same as those used in Experiment 1, except for two words (area and staff; see Appendix D). A female native Japanese speaker (the first author) uttered the spoken Japanese real words for recording. We recorded the spoken Japanese real words in a quiet room to reduce environmental sounds and noise.
Procedures
The methods of Experiment 2 were approximately the same as those used in Experiment 1. There were two procedural differences between Experiments 1 and 2. First, each written Japanese pseudoword ( nuyo or wayu ) was conditioned to 13 spoken Japanese active or passive words in Experiment 2, and to 14 spoken Japanese positive or negative words in Experiment 1. Second, in the first and second evaluation phases, participants in Experiment 2 evaluated emotional features to four written Japanese pseudowords consisting of wayu (ワユ), sohi ( ソヒ ), nuyo ( ヌヨ ), and rehe ( レヘ ) using a 7-point semantic differential scale from 1 ( passive ) to 7 ( active ), whereas participants in Experiment 1 evaluated them using a 7-point semantic differential scale from 1 ( negative ) to 7 ( positive ; Osgood et al. 1957 ; Staats and Staats 1957 ; Staats et al. 1959 ). Before the first and second evaluation phases in Experiment 2, we presented the following sentences: “Please rate how passive or active you feel about the katakana words displayed on the screen by pressing keys 1 ( passive ) to 7 ( active ) on the keyboard.” Additionally, we showed the following explanation before the conditioning phase in Experiment 2: “Please watch the katakana words that will appear on the screen carefully, and carefully listen to and repeat the Japanese words that will be auditorily presented. Press the space key to advance to the next task.”
Analyses
We employed a cumulative link mixed model (Christensen 2019) with two fixed effects (phases and conditions), two random effects (participants and words; Baayen et al. 2008), and one dependent variable (evaluative responses to pseudowords), using the “ordinal” (Christensen 2019) and “emmeans” (Lenth 2022) packages in R (R Core Team 2022). We also used the R packages “Rmisc” to calculate means and 95% confidence intervals (Hope 2022), “FSA” to calculate medians and first and third quartiles (Q1 and Q3; Ogle et al. 2023), and “ggplot2” to create a figure of the results (Wickham 2016; Mangiafico 2015; Szuba et al. 2022). Regarding the fixed effects, phases included before conditioning (phase 1) coded as 1 and after conditioning (phase 2) coded as 2, whereas conditions included a pseudoword conditioned to active words (active), a pseudoword conditioned to neutral words (neutral), and a pseudoword conditioned to passive words (passive). For model selection, we first applied the maximal random structure model (Barr et al. 2013) and then simplified it, comparing the cumulative link mixed models with likelihood ratio tests using the anova() function in R (e.g., Baayen 2008; Winter 2020). Lastly, we applied random (varying) intercepts by participants and words and random (varying) slopes for phases and conditions by participants as random effects (e.g., Baayen et al. 2008; Grasso et al. 2022; Winter 2020).
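The final model described above can be sketched in R as follows. This is an illustrative sketch, not the original analysis script: the data frame and column names (dat, rating, phase, condition, participant, word) are assumptions.

```r
library(ordinal)   # clmm() for cumulative link mixed models
library(emmeans)   # post-hoc contrasts

# rating: evaluative response (1-7) as an ordered factor;
# phase: before (1) vs. after (2) conditioning;
# condition: active, neutral, or passive
dat$rating <- factor(dat$rating, ordered = TRUE)

# Random intercepts by participant and word, and random slopes
# for phase and condition by participant
m <- clmm(rating ~ phase * condition +
            (1 + phase + condition | participant) + (1 | word),
          data = dat)
summary(m)

# Post-hoc comparisons of conditions within each phase
emmeans(m, pairwise ~ condition | phase)
```

The interaction term phase * condition corresponds to the reported test of whether evaluative responses changed differently across conditions after conditioning.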
Results
In the cumulative link mixed model, we used the raw evaluative responses of the pseudowords conditioned to active or passive words (nuyo, ヌヨ; wayu, ワユ) and the raw evaluative responses of the two pseudowords conditioned to neutral words (sohi, ソヒ; rehe, レヘ). The results showed that the effect of phase and the interaction between phase and condition were statistically significant, whereas the effect of condition was not (Fig. 2 and Tables 4, 5, and 6). The post-hoc tests showed that the evaluative responses of pseudowords conditioned to active or neutral words were higher (more active) after conditioning than before. Additionally, the post-hoc analysis showed that, after conditioning, the evaluative responses of the pseudoword conditioned to active words were moderately higher (more active) than those of the pseudoword conditioned to passive words (marginally significant, p = 0.0551). Table 4 shows the medians and first and third quartiles (Q1 and Q3), whereas Fig. 2 shows the means and 95% confidence intervals.
Discussion
Experiment 2 examined whether written Japanese pseudowords can be conditioned to spoken Japanese active or passive words. Two findings emerged. First, there was no significant difference among the evaluative responses of the conditions (pseudowords conditioned to active, passive, and neutral words) before conditioning; after conditioning, however, the evaluative responses of the pseudoword conditioned to active words were moderately higher (more active) than those of the pseudoword conditioned to passive words (marginally significant, p = 0.0551). Second, the evaluative responses of pseudowords conditioned to active or neutral words were higher (more active) after conditioning than before, whereas there was no significant difference between the evaluative responses of the pseudoword conditioned to passive words before and after conditioning. Notably, the evaluative responses of pseudowords conditioned to neutral words also increased after conditioning. The results of Experiment 2 suggest that the evaluative responses (activeness ratings) of a pseudoword conditioned to active words increase through evaluative conditioning.
The neutral words used in this experiment might have included words (e.g., road, lecture, and traffic light) that participants perceived as active, because we selected the neutral words from a previous study that specified them as neutral using only a scale associated with emotional valence (corresponding to the positiveness ratings in this study), not arousal (corresponding to the activeness ratings in this study; Higami et al. 2015). As the final step of this study, we therefore conducted a post-hoc survey (Survey 2) to examine the positiveness (positive/negative) and activeness (active/passive) ratings of all the words used in Experiments 1 and 2.
Survey 2
Participants
Thirty participants (11 females; M age = 43.33; SD age = 8.39), all of whom were native Japanese speakers, were recruited from a crowdsourcing company in Japan (CrowdWorks, Inc.). The participants received a monetary reward of 165 Japanese yen, including incidentals.
Stimuli
The word stimuli were the words used in Experiments 1 and 2 (see Appendices E, F, G, and H).
Procedures
Participants rated the presented words by responding to a Google Form on their personal computers. They performed two types of judgments. In the first section, they evaluated whether the presented words had negative or positive features using a 7-point semantic differential scale ranging from 1 (negative) to 7 (positive). In the second section, they evaluated whether the presented words had passive or active features using a 7-point semantic differential scale ranging from 1 (passive) to 7 (active). The presentation order of the words was randomized in each section.
Analyses
We employed four cumulative link mixed models, one for each combination of word set and rating. For the words used in Experiment 1, two models included conditions (positive, neutral, and negative words) as the fixed effect, participants and words as the random effects, and positiveness ratings or activeness ratings as the dependent variable. For the words used in Experiment 2, two models included conditions (active, neutral, and passive words) as the fixed effect, participants and words as the random effects, and positiveness ratings or activeness ratings as the dependent variable. We fitted the cumulative link mixed models using the “ordinal” (Christensen 2019) and “emmeans” (Lenth 2022) packages in R (R Core Team 2022). We also used “Rmisc” to calculate the means and 95% confidence intervals (Hope 2022), “FSA” to calculate the medians and first and third quartiles (Q1 and Q3; Ogle et al. 2023), and “ggplot2” to create figures of the results (Wickham 2016). For model selection, we first employed the maximal random structure model (Barr et al. 2013) and then simplified it, comparing the cumulative link mixed models with likelihood ratio tests using the anova() function in R (e.g., Baayen 2008; Winter 2020).
Lastly, we applied random (varying) intercepts by participants and words, and random (varying) slopes for conditions by participants as random effects in all the cumulative link mixed models (e.g., Baayen et al. 2008 ; Grasso et al. 2022 ; Winter 2020 ).
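The model selection procedure described above, starting from the maximal random structure and simplifying via likelihood ratio tests, can be sketched in R as follows. This is an illustrative sketch under assumed variable names (dat, rating, condition, participant, word), not the original analysis script.

```r
library(ordinal)

# Maximal random structure model (Barr et al. 2013)
m_max <- clmm(rating ~ condition +
                (1 + condition | participant) + (1 | word),
              data = dat)

# Simplified model with random intercepts only
m_simple <- clmm(rating ~ condition + (1 | participant) + (1 | word),
                 data = dat)

# Likelihood ratio test between the nested models
anova(m_simple, m_max)
```

If the likelihood ratio test is not significant, the simpler random structure can be retained; otherwise the richer structure is kept.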
Results of Survey 2: positiveness of words used in Experiment 1
In the cumulative link mixed model, we used the raw positiveness ratings of the positive, neutral, and negative words in Survey 2. The results showed that the main effect of condition was statistically significant (Fig. 3 and Tables 7, 8, and 9). The post-hoc tests showed that the positiveness ratings of positive words were higher (more positive) than those of neutral and negative words. Additionally, the post-hoc analysis showed that the positiveness ratings of neutral words were higher (more positive) than those of negative words. Table 7 shows the medians and first and third quartiles (Q1 and Q3), whereas Fig. 3 shows the means and 95% confidence intervals.
Results of Survey 2: activeness of words used in Experiment 1
In the cumulative link mixed model, we used the raw activeness ratings of the positive, neutral, and negative words in Survey 2. The results showed that the main effect of condition was statistically significant (Fig. 4 and Tables 10, 11, and 12). The post-hoc tests showed that the activeness ratings of positive words were higher (more active) than those of neutral and negative words. Additionally, the post-hoc analysis showed that the activeness ratings of neutral words were higher (more active) than those of negative words. Table 10 shows the medians and first and third quartiles (Q1 and Q3), whereas Fig. 4 shows the means and 95% confidence intervals.
Results of Survey 2: positiveness of words used in Experiment 2
In the cumulative link mixed model, we used the raw positiveness ratings of the active, neutral, and passive words in Survey 2. The results showed that the main effect of condition was marginally significant (p = 0.0508; Fig. 5 and Tables 13, 14, and 15). The post-hoc tests, which were employed to check differences among the conditions, showed that the positiveness ratings of active, neutral, and passive words did not differ significantly. Table 13 shows the medians and first and third quartiles (Q1 and Q3), whereas Fig. 5 shows the means and 95% confidence intervals.
Results of Survey 2: activeness of words used in Experiment 2
In the cumulative link mixed model, we used the raw activeness ratings of the active, neutral, and passive words in Survey 2. The results showed that the main effect of condition was statistically significant (Fig. 6 and Tables 16, 17, and 18). The post-hoc tests showed that the activeness ratings of active words were higher (more active) than those of neutral and passive words. Additionally, the post-hoc analysis showed that the activeness ratings of neutral words were higher (more active) than those of passive words. Table 16 shows the medians and first and third quartiles (Q1 and Q3), whereas Fig. 6 shows the means and 95% confidence intervals.
Discussion
In Survey 2, we examined the differences in the positiveness and activeness ratings of the word conditions used in Experiments 1 and 2. Regarding the words used in Experiment 1, the results of Survey 2 showed that the positiveness and activeness ratings of positive words were higher than those of neutral and negative words. Additionally, the positiveness and activeness ratings of neutral words were higher than those of negative words. Regarding the words used in Experiment 2, the results of Survey 2 showed that the activeness ratings of active words were higher than those of neutral and passive words. Additionally, the activeness ratings of neutral words were higher than those of passive words. However, the positiveness ratings of the words used in Experiment 2 did not differ significantly. These results suggest that although the positiveness and activeness ratings of the words used in Experiments 1 and 2 were largely controlled, the absence of a significant difference in positiveness among the word conditions might have influenced the evaluative conditioning in Experiment 2.
In evaluative conditioning, participants may interactively condition the positiveness and activeness ratings of words to those of pseudowords. A previous study showed that positiveness ratings (valence in the previous study) and activeness ratings (arousal in the previous study) were positively correlated in effective evaluative conditioning (Gawronski and Mitchell 2014). These results are consistent with our findings in Experiment 1 and Survey 2. Gawronski and Mitchell also showed that evaluative conditioning is more effective for high active UCS (high arousal UCS in the previous study) than for low active UCS (low arousal UCS in the previous study; Gawronski and Mitchell 2014). Additionally, the previous study indicated that the activeness (arousal) ratings of stimuli may influence the memory of CS and UCS pairs (Gawronski and Mitchell 2014). However, Survey 2 showed that whereas the positiveness ratings of the words used in Experiment 2 did not differ significantly, the activeness ratings of active words were higher than those of neutral and passive words. The results of Experiment 2 showed that participants conditioned the activeness ratings of active words to those of pseudowords, whereas they did not condition those of passive words to those of pseudowords. In sum, this finding suggests that when participants condition the activeness (arousal) ratings of passive (low arousal) words to those of pseudowords, the positiveness (valence) ratings of the words would be important in the evaluative conditioning.
General discussion
In this study, we conducted two behavioral experiments after selecting the stimuli in a survey study. The findings of Experiment 1 suggest that evaluative responses of written Japanese pseudowords can be conditioned to spoken Japanese positive or negative words, and that this conditioning might not be driven by the mere-exposure effect or the familiarity effect of verbal information (Ando et al. 2021; Citron et al. 2014; Monahan et al. 2000; Zajonc 1968, 2001). Similarly, the results of Experiment 2 suggest that the evaluative responses of written Japanese pseudowords can also be conditioned to spoken Japanese active words. For future research, we propose the following. First, we selected neutral words as a control condition when conditioning the evaluative responses of pseudowords to active and passive words. However, these words might not have been neutral for the participants in Experiment 2, because the neutral words were selected from a previous study that specified them as neutral using only a scale associated with emotional valence (Higami et al. 2015). When we checked the positiveness and activeness ratings of the words used in Experiment 2, the results of the post-hoc survey (Survey 2) suggested that the neutral words functioned as neutral words, although the positiveness ratings of the active, passive, and neutral words did not differ significantly. Therefore, the results of Survey 2 suggest that when participants condition the activeness ratings of passive words to those of pseudowords, the positiveness ratings of the words would be essential to effective evaluative conditioning. Second, future studies can replicate this study, which measured evaluative responses both before and after conditioning, using English stimuli with native English speakers, since Staats and Staats (1957) and Staats et al. (1959) only measured evaluative responses after conditioning.
Third, although the current and previous studies used spoken real words as UCS and written pseudowords as CS, researchers can use them conversely. If future studies also use written real words as UCS and spoken pseudowords as CS, researchers can examine the differential effects of stimulus modality in verbal evaluative conditioning. Finally, since conditioning the evaluative responses of written pseudowords to positive and active words was robust across participants, there is scope to examine the clinical application of this approach for patients in the future. If words in patients' vocabularies can be conditioned to positive or active words, the evaluative responses to these words stored in their vocabularies may increase.
In this study, we improved some experimental methods of previous studies on verbal evaluative conditioning (Passalli et al. 2022; Staats and Staats 1957; Staats et al. 1959; Vidal et al. 2021). First, whereas Staats and Staats (1957) could not examine differences between the subjective evaluations of pseudowords before and after conditioning the subjective evaluations (meanings) of spoken real words to those of pseudowords, we assessed these differences. Second, since few studies had examined whether native speakers of non-alphabetic languages can condition the subjective evaluations (meanings) of non-alphabetic real words to those of non-alphabetic pseudowords, we examined whether native Japanese speakers can condition the subjective evaluations (meanings) of Japanese real words to those of Japanese pseudowords. This is the first study to show the effects of verbal evaluative conditioning in Japanese.
We used the experimental procedures (the conditioning phase and the evaluation phase after the conditioning phase) of Staats and colleagues' research (e.g., Staats and Staats 1957; Staats et al. 1959). In the conditioning phase, participants explicitly repeated the spoken CS through speech production. For these speech productions, the participants listened to, retained, and repeated each spoken stimulus. This procedure would engage working memory processes for the maintenance and scanning of verbal stimuli (e.g., Kambara et al. 2017, 2018). Performance on a working memory task would correlate with performance on associative learning for linguistic and referential features (e.g., Horinouchi et al. under review). If speech production had not been included in this study, participants might not have paid attention to the spoken CS, and the effects of the verbal evaluative conditioning might have been weak. To check attentional and learning effects in verbal evaluative conditioning, a related study used a memory task (e.g., Vidal et al. 2021). Future studies can examine the relationships between these effects and procedural differences in verbal evaluative conditioning.
Future studies should consider the following points based on our study's limitations. First, future research needs to consider the selection of pseudowords. The four pseudowords used in Experiments 1 and 2 were selected from a Japanese psycholinguistic study (Umemoto et al. 1955) because the associative and meaningfulness values of these pseudowords were extremely low in that study. However, all the emotional dimensions of such pseudowords should be checked before future studies, because their values may change over time owing to increased second-language proficiency, neologisms, and borrowings, among other factors. For example, a Japanese person may think that the pseudoword nuyo (ヌヨ) is orthographically similar to another word (mayo: マヨ), which means mayonnaise in Japanese.
Second, future studies should use the same scales in the surveys and experiments. In our preliminary survey (Survey 1), we used 5-point scales, whereas in the two experiments and the post-hoc survey (Survey 2), we used 7-point scales. The number of points on these scales may affect the evaluations (e.g., Dawes 2008).
Third, future studies using the current paradigm should check the valence and arousal values of each word (UCS) and pseudoword (CS). In this study, we did not collect valence and arousal evaluations of each word and pseudoword before the experiments. Researchers can check not only participants' ratings of the pseudowords but also those of the real words in the first evaluation phase, and use those ratings to create custom positive, neutral, and negative word sets for each participant (e.g., a given person might consider “lecture” a very negative word). Moreover, the neutral words should be determined based on similar samples and scales (positive–negative, active–passive, and strong–weak scales). Hence, researchers need to ensure that Japanese psycholinguistic databases include valence and arousal evaluations of many words.
Fourth, to the best of our knowledge, although previous studies have investigated the longitudinal effects of associative learning for linguistic and nonlinguistic features (e.g., Havas et al. 2018; Kambara et al. 2013; Lee et al. 2003; Takashima et al. 2017), no study has examined the longitudinal effects of verbal evaluative conditioning. Some studies have shown that the evaluative conditioning effects of pictures persist for days after the conditioning (e.g., Förderer and Unkelbach 2013; Waroquier et al. 2020). A related study suggested that evaluative conditioning effects were not affected by sleep-related memory consolidation (Richter et al. 2021). Because the evaluative conditioning effects of pictures remained after the conditioning day, the effects of verbal evaluative conditioning may also be maintained.
Fifth, we only specified that participants be native Japanese speakers when recruiting for this study. We did not collect data on the participants' linguistic experience (e.g., their experiences of L2 learning). In Japan, most people learn English from elementary or junior high school through university. Because experiences of L2 learning may influence verbal evaluative conditioning, future studies should consider experiences of language learning. A previous study showed that the effects of verbal evaluative conditioning in L1 were greater than those in L2 (Vidal et al. 2021).
Lastly, researchers should consider the CS, UCS, and other experimental methods in evaluative conditioning. In a review article, De Houwer et al. (2001) reported that evaluative conditioning is robust and ubiquitous. However, some failures of evaluative conditioning occur owing to the variety of methods, stimuli, number of stimuli, order of stimulus presentation, and experimental design. In evaluative conditioning, the identification of the CS would be an important factor (Stahl and Bading 2020; Stahl et al. 2016). Moreover, foveal presentation of CS (photographs) would be more effective than parafoveal presentation in the evaluative conditioning of photographs as CS and emotional faces as UCS (Dedonder et al. 2014). The effects of evaluative conditioning are greater for simultaneous pairings than for sequential pairings, because attention and memory affect evaluative conditioning (Stahl and Heycke 2016). Heycke and colleagues reported that, with auditory UCSs, cross-modal evaluative conditioning effects were greater for visual CSs presented for 1000 ms than for those presented briefly (Heycke et al. 2017). The evaluative conditioning of affective pictures (UCS) and supraliminally presented verbal stimuli (CS) has been shown with artificial grammars (Jurchiş et al. 2020) and an unfamiliar language (Amd 2022). Additionally, Kuchinke et al. (2015) reported that early modulations of event-related potentials occur during the recognition of pseudowords conditioned to emotional pictures. In considering these methods, researchers also need to consider the samples for experiments. From a developmental perspective, evaluative conditioning has been demonstrated in preschool- and school-aged children (Field 2006; Halbeisen et al. 2017, 2021). However, a related study showed no effect of evaluative conditioning using both verbal and nonverbal stimuli in school-aged children (Charlesworth et al. 2020).
These previous findings imply that nonverbal stimuli might be more effective than verbal stimuli in evaluative conditioning, especially for younger children, because younger children would have smaller repertoires of words (associative pairs of linguistic and nonlinguistic information) in their mental lexicon than adults.
Regarding the real-world implications of this study, verbal evaluative conditioning can be applied as a psycholinguistic therapy. For example, if a person has fearful or negative emotions associated with a word (e.g., a person's name or a specific word like school), verbal evaluative conditioning can be useful to improve the evaluation of that word. Future studies can employ this perspective in applied research.
In conclusion, this study demonstrates verbal evaluative conditioning of Japanese pseudowords to Japanese words connected to negative, positive, and active emotions in native Japanese speakers. We improved the methods used in previous studies, and we have discussed the limitations of our study and directions for future research.
Category indicates the word categories used in this study. Positive: positive words; Negative: negative words; Active: active words; Passive: passive words; Strong: strong words; Weak: weak words. We translated the English words of previous studies (Staats and Staats 1957; Staats et al. 1959) into Japanese. There are differences between the parts of speech (word classes) of some words used in the previous and current studies. * indicates that the word was used as an adjective in Staats and Staats (1957). + indicates that the word was used as a noun in Staats and Staats (1957). Additionally, “sacredness,” “excited,” and “beauty” were used in Survey 1, whereas “sacred,” “excitement,” and “beautiful” were used in Experiment 1 or 2 and Survey 2. Staats and Staats (1957) also used “sacred,” “excited,” and “beauty.” This table shows the words as “sacredness,” “excitement,” and “beautiful.” ** indicates that the word was not used in Staats and Staats (1957) to measure whether the word was strong or weak. The word frequency of fast/quick shows the sum of the word frequencies of fast and quick. We counted the word moras. The word frequencies were taken from Amano and Kondo (2000). Because sixteen word frequencies could not be found in the database, “–” indicates no value in Appendix A.
Appendix B Means and standard deviations of evaluative responses of words selected for Experiment 1
M: Mean; SD: Standard Deviation. Regarding the evaluations of positive and negative words, participants evaluated whether each presented word was positive or negative using a 5-point semantic differential scale in Survey 1 (1: negative; 5: positive; Osgood and Suci 1955; Osgood et al. 1957). The means and standard deviations show the evaluative responses to negative and positive words in Survey 1. The selected positive and negative words were used in Experiment 1. These Japanese words were translated from the English words in Staats and Staats (1957) by authors who are native speakers of Japanese. There are differences between the parts of speech (word classes) of some words used in the previous and current studies. * indicates that the word was used as an adjective in Staats and Staats (1957). + indicates that the word was used as a noun in Staats and Staats (1957). Additionally, “sacredness” and “beauty” were used in Survey 1, whereas “sacred” and “beautiful” were used in Experiment 1 and Survey 2. Staats and Staats (1957) also used “sacred” and “beauty.” This table shows the words as “sacredness” and “beautiful.” These words were auditorily presented to each participant in Experiment 1.
Appendix C Means and standard deviations of evaluative responses of words selected for Experiment 2
M: Mean; SD: Standard Deviation. Regarding the evaluations of active and passive words, participants evaluated whether each presented word was active or passive using a 5-point semantic differential scale in Survey 1 (1: passive; 5: active; Osgood and Suci 1955; Osgood et al. 1957). The means and standard deviations show the evaluative responses to active and passive words in Survey 1. The selected active and passive words were used in Experiment 2. These Japanese words were translated from the English words in Staats and Staats (1957) by authors who are native speakers of Japanese. There are differences between the parts of speech (word classes) of some words used in the previous and current studies. * indicates that the word was used as an adjective in Staats and Staats (1957). + indicates that the word was used as a noun in Staats and Staats (1957). Additionally, “excited” was used in Survey 1, whereas “excitement” was used in Experiment 2 and Survey 2. Staats and Staats (1957) also used “excited.” This table shows the word as “excitement.” These words were auditorily presented to each participant in Experiment 2.
Appendix D Japanese characters, pronunciations, and English meanings of neutral words used in Experiment 1 and 2
These words were selected from a previous study that examined the emotional valence and arousal of two-character kanji words (Higami et al. 2015). The wordlist of neutral words in Experiment 1 is approximately the same as that in Experiment 2, except for two words (area and staff). In Experiment 1, participants evaluated whether each presented word was positive or negative using a 7-point semantic differential scale (1: negative; 7: positive; Osgood and Suci 1955; Osgood et al. 1957). In Experiment 2, participants evaluated whether each presented word was active or passive using a 7-point semantic differential scale (1: passive; 7: active; Osgood and Suci 1955; Osgood et al. 1957). The authors originally translated the Japanese words into English meanings for this table. These words were auditorily presented to each participant in Experiments 1 and 2.
Appendix E Results of Survey 2 to check the positiveness and activeness ratings of positive and negative words used in Experiment 1
M: Mean; SD: Standard Deviation. First, participants evaluated whether each presented word was positive or negative using a 7-point semantic differential scale in Survey 2 (1: negative; 7: positive; Osgood and Suci 1955; Osgood et al. 1957). Second, participants evaluated whether each presented word was active or passive using a 7-point semantic differential scale in Survey 2 (1: passive; 7: active; Osgood and Suci 1955; Osgood et al. 1957). The means and standard deviations show the evaluative responses to positive and negative words in Survey 2. The selected positive and negative words were used in Experiment 1. These Japanese words were translated from the English words in Staats and Staats (1957) by authors who are native speakers of Japanese. There are differences between the parts of speech (word classes) of some words used in the previous and current studies. * indicates that the word was used as an adjective in Staats and Staats (1957). + indicates that the word was used as a noun in Staats and Staats (1957). Additionally, “sacredness” and “beauty” were used in Survey 1, whereas “sacred” and “beautiful” were used in Experiment 1 and Survey 2. Staats and Staats (1957) also used “sacred” and “beauty.” This table shows the words “sacred” and “beautiful.” These words were auditorily presented to each participant in Experiment 1.
Appendix F Results of Survey 2 to check the positiveness and activeness ratings of neutral words used in Experiment 1
These words were selected from a previous study that examined the emotional valence and arousal of two-character kanji words (Higami et al. 2015). The wordlist of neutral words in Experiment 1 is approximately the same as that in Experiment 2, except for two words (area and staff). First, participants evaluated whether each presented word was positive or negative using a 7-point semantic differential scale in Survey 2 (1: negative; 7: positive; Osgood and Suci 1955; Osgood et al. 1957). Second, participants evaluated whether each presented word was active or passive using a 7-point semantic differential scale in Survey 2 (1: passive; 7: active; Osgood and Suci 1955; Osgood et al. 1957). The means and standard deviations show the evaluative responses to neutral words in Survey 2. We originally translated the Japanese words into English meanings for this table. These words were auditorily presented to each participant in Experiment 1.
Appendix G Results of Survey 2 to check the positiveness and activeness ratings of active and passive words used in Experiment 2
M: Mean; SD: Standard Deviation. First, participants evaluated whether each presented word was positive or negative using a 7-point semantic differential scale in Survey 2 (1: negative; 7: positive; Osgood and Suci 1955; Osgood et al. 1957). Second, participants evaluated whether each presented word was active or passive using a 7-point semantic differential scale in Survey 2 (1: passive; 7: active; Osgood and Suci 1955; Osgood et al. 1957). The means and standard deviations show the evaluative responses to active and passive words in Survey 2. The selected active and passive words were used in Experiment 2. These Japanese words were translated from the English words in Staats and Staats (1957) by authors who are native Japanese speakers. The parts of speech (word classes) of some words used in the previous and current studies differ. * indicates that the word was used as an adjective in Staats and Staats (1957). + indicates that the word was used as a noun in Staats and Staats (1957). Additionally, “excited” was used in Survey 1, whereas “excitement” was used in Experiment 2 and Survey 2. Staats and Staats (1957) also used “excited.” This table presents the word as “excitement.” These words were auditorily presented to each participant in Experiment 2.
Appendix H Results of Survey 2 to check positiveness and activeness ratings of neutral words used in Experiment 2
These words were selected from a previous study that examined the emotional valence and arousal of two-character kanji words (Higami et al. 2015 ). The wordlists of neutral words in Experiment 2 are approximately the same as those in Experiment 1, except for two words (area and staff). First, participants evaluated whether each presented word was positive or negative using a 7-point semantic differential scale in Survey 2 (1: negative ; 7: positive ; Osgood and Suci 1955 ; Osgood et al. 1957 ). Second, participants evaluated whether each presented word was active or passive using a 7-point semantic differential scale in Survey 2 (1: passive ; 7: active ; Osgood and Suci 1955 ; Osgood et al. 1957 ). The means and standard deviations show the evaluative responses to the neutral words in Survey 2. The Japanese words were translated into English by the authors for this table. These words were presented auditorily to each participant in Experiment 2.
Acknowledgements
We thank the reviewers and editors who provided insightful suggestions and careful reviews that improved this article. Additionally, we thank Yutao Yang, Yan Yan, Ukwueze Obinna, Zihan Lin, Nan Wang, Maika Hayashi, Yoshiki Oe, Ayumu Kodama, Ryuya Matsunaga, and the other faculty members and students of the Department of Psychology at Hiroshima University who provided important suggestions and support.
Author contributions
MA and TK contributed to conceptualization, methodology, formal analyses, resources, writing (original draft preparation), writing (review and editing), and visualization. MA contributed to validation, software, investigation, and data curation. TK contributed to supervision, project administration, and funding acquisition. All authors have read and agreed to the published version of the manuscript.
Funding
The corresponding author (T.K.) was supported by a KAKENHI Grant-in-Aid for Early-Career Scientists, KAKENHI Grant-in-Aid for Scientific Research (B), KAKENHI Grant-in-Aid for Scientific Research (C), Research Grant of Urakami Foundation for Food and Food Culture Promotion, and Research Grant of the Murata Science Foundation. Additionally, this research was conducted as part of the School of Education Joint Research Project 2020, 2021, and 2022 at Hiroshima University, and received research support from the School of Education.
Data availability
The analyzed data and spoken stimuli are available on Open Science Framework ( https://osf.io/re2s4/?view_only=8543196d6e9c4aa8a1e2b4f2ed7ceed9 ).
Declarations
Conflict of interest
The authors declare that there is no conflict of interest.

Published in Cognitive Processing (Cogn Process. 2023 Jul 14; 24(3):387-413) under a CC BY license.
|
PMC10787690 (PMID: 38189953)

Introduction
Apoptosis, one type of programmed cell death (PCD), represents a highly conserved cellular suicide program of eukaryotes that is triggered by extrinsic or intrinsic cellular stimuli. It has been described and investigated in detail in mammalian cells (Hamann et al. 2008 ; Rico-Ramírez et al. 2022 ). In fungi, apoptosis is called apoptotic-like PCD because the manifestation of PCD in fungi differs from that in mammals (Shlezinger et al. 2012 ; Hardwick 2018 ). Apoptotic-like PCD is involved in various biological processes in fungi, such as stress adaptation, development, aging, and host-pathogen interactions (Hamann et al. 2008 ; Häcker 2018 ; Gonçalves et al. 2020 ). During antagonistic fungal-fungal interactions, compounds secreted by a competitor may trigger apoptotic-like PCD in a fungus, providing a remarkable selective advantage in the competition for nutrients (Shlezinger et al. 2012 ). At the same time, apoptotic removal of damaged cells can also contribute to a fitter, better-adapted population in the long term (Hamann et al. 2008 ; Saladi et al. 2020 ). Thus, apoptotic-like PCD is an important strategy for fungi to gain an advantage in antagonistic fungal competition. However, the mechanisms driving apoptotic-like PCD during fungal antagonistic interactions remain unclear.
Apoptosis-inducing factor (AIF) is a flavoprotein conserved across eukaryotic kingdoms (Novo et al. 2021 ). It is a caspase-independent apoptosis effector located in the mitochondrial intermembrane space (Miramar et al. 2001 ; Elguindy and Nakamaru-Ogiso 2015 ). Upon proteolytic cleavage, AIF translocates from the mitochondria to the nucleus, leading to several hallmarks of apoptosis, such as chromatin condensation and DNA degradation (Delavallée et al. 2011 ; Cho et al. 2018 ). The cytoplasmic AIF-homologous mitochondrion-associated inducer of death (AMID) regulates apoptotic-like PCD in a similar way. Many fungal AIF or AMID homologs have been characterized. For example, in an AIF ( Ynr074cp ) knockout strain of the baker's yeast Saccharomyces cerevisiae , H 2 O 2 - and acetic acid–induced apoptotic-like PCD is significantly attenuated (Wissing et al. 2004 ). Aif1 is also required for apoptotic-like PCD in the basidiomycetous yeast Cryptococcus neoformans ; its deletion promotes chromosome aneuploidy and fluconazole resistance (Semighini et al. 2011 ). In contrast, AIF1 of the ascomycete yeast Candida albicans plays a dual role in regulating cell death under different concentrations of stress-causing agents: AIF1 deletion leads to attenuated apoptotic-like PCD under 2 mM H 2 O 2 or 20 mM acetic acid but results in reversed sensitivity under more severe stress (Ma et al. 2016 ). Unlike these unicellular yeast species, filamentous fungal species typically have several AIF or AMID paralogs. According to reports in Podospora anserina (Brust et al. 2010 ), Neurospora crassa (Carneiro et al. 2012 ), and Aspergillus nidulans (Savoldi et al. 2008 ; Dinamarco et al. 2010 ), at least some of them play a role in apoptotic-like PCD during stress. Accordingly, AIF- or AMID-related PCD is speculated to be universal and critical among all fungi. To date, however, no AIF or AMID homologs have been identified and characterized in multicellular basidiomycetes.
Cytological studies have linked PCD in basidiomycetes to heteroincompatibility reactions between mycelia of Helicobasidium mompa and Rosellinia necatrix (Inoue et al. 2011a , 2011b ), mycelial secondary metabolite (ganoderic acid) production in Ganoderma lucidum (Zhu et al. 2019 ), mycelial aging in Lentinula edodes (Gao et al. 2019 ), targeted tissue degradation during fruiting body development that shapes the mushrooms of Coprinopsis cinerea and Agaricus bisporus (Lu and Sakaguchi 1991 ; Umar and Van Griensven 1997 ), checkpoints in the progression of meiosis in C. cinerea (Celerin et al. 2000 ; Lu et al. 2003 ; Sugawara et al. 2009 ), and heat stress in Pleurotus species (Song et al. 2014 ).
AIF and AMID share FAD-binding motifs in their N-termini. Independent of their apoptogenic function, they also possess NAD(P)H oxidoreductase activities capable of generating superoxide radicals (Urbano et al. 2005 ; Joza et al. 2009 ; Elguindy and Nakamaru-Ogiso 2015 ; Herrmann and Riemer 2021 ). Under normal conditions, AIF and AMID participate in respiratory complex I assembly, play essential roles in oxidative phosphorylation and redox control, and contribute to regulating reactive oxygen species (ROS) (Joza et al. 2009 ). ROS, comprising both free radical oxygen intermediates and non-free ones, including ∙O 2 , OH − , and H 2 O 2 , possess many physiological functions in fungi, including signal transduction, interspecific interactions, and secondary metabolite synthesis (Miranda et al. 2013 ; Breitenbach et al. 2015 ; Holze et al. 2018 ; Liu et al. 2022 ). Research has shown that enhanced ROS formation is a prerequisite for AIF to be carbonylated, proteolytically cleaved, and released from mitochondria (Norberg et al. 2010 ; Su et al. 2021 ). In S. cerevisiae (Li et al. 2006 ) and C. albicans (Ma et al. 2016 ), AIF or AMID deficiency leads to decreased ROS production under low levels of oxidative stress but it results in higher ROS levels when exposed to higher concentrations of H 2 O 2 , indicating a complicated link between ROS and AIF/AMID.
Previously, we reported that intracellular ROS act as signal molecules to stimulate defense responses in C. cinerea through the expression of various detoxification proteins, including the laccase Lcc9, during interaction with the mucoromycete Gongronella sp. w5 (Liu et al. 2022 ). In this study, based on gene silencing and overexpression analyses, we describe an AIF homolog, Cc AIF1, in C. cinerea monokaryon Okayama 7 as a regulator of ROS that promotes Lcc9 expression. Ccaif1 induced apoptotic-like PCD in C. cinerea cells grown in coculture with Gongronella sp. w5, whereas Ccaif1 silencing disrupted this process and slowed C. cinerea mycelial growth and asexual sporulation, as well as Lcc9 expression. Thus, AIF-related PCD is an effective defense mechanism for C. cinerea in fungal-fungal interactions. Furthermore, based on the mechanisms elucidated here, a new strategy for high laccase yields was established by combining Cc AIF1 overexpression with H 2 O 2 stimulation to trigger C. cinerea laccase production in axenic culture.
Fungi and culture media
C. cinerea Okayama 7 (#130; A43 , B43 , ade8 ) (ATCC No. MYA-4618TM) and Gongronella sp. w5 (China Center for Type Culture Collection No. AF2012004) were maintained on YMG agar (yeast malt glucose; per liter, 4 g yeast extract, 10 g malt extract, 4 g glucose, and 15 g agar) or PDA (potato dextrose agar; per liter, filtrate of 200 g boiled potato, 20 g glucose, and 15 g agar) plates at 4 °C according to Pan et al. ( 2014 ). All fungal cultivation experiments were conducted at 37 °C.
Axenic culture and separated coculture
To maintain normal growth, axenic cultivation of C. cinerea in liquid FAHX medium (fructose DL-asparagine HX; per liter, 15.0 g fructose, 1.5 g DL-asparagine, 1.0 g KH 2 PO 4 , 0.5 g MgSO 4 ·7H 2 O, 0.1 g Na 2 HPO 4 ·5H 2 O, 10.0 g CaCl 2 , 1.0 mg FeSO 4 ·7H 2 O, 28.0 mg adenine, 2.0 mg CuSO 4 ·5H 2 O, and 50.0 μg vitamin B 1 ) and separated coculture of C. cinerea (grown in dialysis tubes) with Gongronella sp. w5 in SAHX medium (sucrose DL-asparagine HX; in which sucrose (7.5 g/L) substituted for the fructose of FAHX) were performed according to Hu et al. ( 2019 ) and Liu et al. ( 2022 ). In all experiments, time 0 h of cocultivation refers to the start of coculture, when homogenized Gongronella sp. w5 mycelium was added to the free medium of a 36-h-old culture of C. cinerea pregrown in a dialysis tube.
Sequence and phylogenetic analysis of Cc AIF1 and Cc AIF2
The sequence similarity search of Cc AIF1 and Cc AIF2 was performed using NCBI BLASTP software ( http://blast.ncbi.nlm.nih.gov/Blast.cgi ). Multiple sequence alignment of AIF or AMID with homologous sequences from other species was performed using Clustal X 2.0 and Phylogeny fr3 ( http://www.phylogeny.fr/index.cgi ). The phylogenetic tree was constructed using MEGA 7 based on the neighbor-joining method (Kumar et al. 2016 ).
Gene function assays in yeast
The coding region of Cc AIF1 was introduced into the Hin d III/ Bam H I-digested yeast expression vector pYES2CT (kindly provided by Professor Fan Yang, Dalian Polytechnic University) under the control of the yeast GAL1 promoter. The resulting recombinant vector and the empty pYES2CT were used to transform the strain S. cerevisiae Y1HGold by the lithium acetate method, giving a Y1H- Cc AIF1 strain and a Y1H-vector strain, respectively. The two transformants were then spotted on a synthetic drop-out SD-glucose plate or an SD-galactose plate and incubated at 30 °C for 3 days.
For DAPI (4′,6-diamidino-2-phenylindole) staining of yeast nuclei, cells cultured for 36 h were collected, resuspended in 100% (v/v) methanol for brief fixation and permeabilization, and then stained for 15 min in the dark according to the manufacturer's instructions (Beyotime Biotech, Shanghai, China). Cell images were taken using a laser confocal microscope (Olympus, Tokyo, Japan) at 364 nm excitation and 454 nm emission wavelengths.
For the cellular localization assay, a Y1H-GFP- Cc AIF1 strain was obtained by first fusing the coding region of gfp in frame to the 5′-end of gene Ccaif1 (Elguindy and Nakamaru-Ogiso 2015 ; Ma et al. 2016 ), cloning the fragment into pYES2CT and transforming the construct into yeast as described above. Cells grown for 3 d on SD-galactose medium were incubated with 20 ng/mL Mitotracker Red CMXRos (Beyotime Biotech, Shanghai, China) for 15 min, washed twice with PBS (pH 7.4), and images were taken using a laser confocal microscope at 488 nm and 594 nm emission wavelengths, respectively (Akgul et al. 2000 ).
To verify that Ccaif1 expression in yeast caused DNA fragmentation, TUNEL (TdT-mediated dUTP nick-end labeling) staining and the comet assay were used. The cells were grown on SD-glucose and SD-galactose medium, respectively, for 3 days. For TUNEL staining, the collected cells were rinsed with PBS and fixed with 4% formaldehyde for 30 min. Then, the cells were resuspended in PBS (pH 7.4) buffer containing 0.3% Triton X-100 and incubated at room temperature for 5 min. After washing twice with PBS, the cells were incubated with the TUNEL detection buffer (Beyotime Biotech, Shanghai, China) at 37 °C away from light for 60 min. The images were taken using a laser confocal microscope at 594 nm emission wavelength. For the comet assay, normal-melting-point agarose was first coated onto pretreated slides. Then, the cell suspension was mixed with low-melting-point agarose at 37 °C and coated onto the slides containing the normal-melting-point agarose. The slides were treated with lysis solution (Beyotime Biotech, Shanghai, China) for 60 min at 4 °C, followed by gel electrophoresis. Subsequently, the slides were neutralized, stained with PI solution at room temperature, and observed under a fluorescence microscope at 594 nm emission wavelength.
Construction of Ccaif1 silencing, Ccaif1 overexpression, and gfp - Ccaif1 overexpression C. cinerea strains
Total RNA was extracted from C. cinerea using the RNAiso Plus extraction reagent (TaKaRa, Dalian, China) according to the manufacturer’s protocol, followed by RNase-free DNase digestion (Promega, Beijing, China). One microgram of total RNA was used as the template for cDNA synthesis using a PrimeScript RT reagent kit (TaKaRa, Dalian, China). A Ccaif1 antisense silencing fragment comprising the gene sequence of 930 to 1146 bp and the full-length cDNA of Ccaif1 were amplified using primers listed in Table S1 and inserted behind the A. bisporus gpdII promoter into plasmid pYSK7 (Kilaru et al. 2006b ) through homologous recombination in Y1HGold, respectively, as described by Liu et al. ( 2022 ). The gfp - Ccaif1 overexpression vector was constructed by fusing egfp to the full-length cDNA of Cc AIF1 at the N-terminus to maintain its right localization (Elguindy and Nakamaru-Ogiso 2015 ; Ma et al. 2016 ). The resultant Ccaif1 silencing plasmid pYSK-si Ccaif1 , Ccaif1 overexpression plasmid pYSK-ov Ccaif1 , or gfp - Ccaif1 overexpression plasmid pYSK-ov gfp - Ccaif1 was cotransformed with the selection vector p Cc Ade8 into C. cinerea protoplasts according to Dörnte and Kües ( 2012 ). Transformants were selected on regeneration medium without adenine and further verified based on PCR amplification of the antisense Ccaif1 or the full-length cDNA of Ccaif1 using primers PF and PR (Table S1 ) (Dörnte and Kües 2012 ; Liu et al. 2022 ).
RNA extraction and quantitative reverse transcription PCR (qRT-PCR) analysis
C. cinerea wild type and transformant mycelia from axenic cultures or separated cocultures at 0, 12, 24, 36, 48, 60, 72, 84, and 96 h incubation were employed for total RNA extraction. qRT-PCR was performed to analyze the transcript levels of Ccaif1 and lcc9 using a SYBR Green kit (TaKaRa, Dalian, China) on a Roche LightCycler 96 Real-Time PCR System (Roche, Basel, Switzerland). The relative expression levels were calculated using the 2 −∆∆CT method (Livak and Schmittgen 2001 ). The β-actin gene was chosen as the reference gene throughout the study (Liu et al. 2022 ). Gene expression analysis was performed for all clones on three parallel cultures each, with measurements in triplicate for each individual biological sample.
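The relative-expression arithmetic behind the qRT-PCR analysis can be sketched as follows. This is a minimal illustration of the 2^−ΔΔCT calculation of Livak and Schmittgen (2001); the Ct values are hypothetical, not measurements from this study.

```python
# Minimal sketch of the 2^-ddCt relative-expression calculation
# (Livak and Schmittgen 2001). All Ct values below are invented.

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene, normalized to a reference gene
    (here beta-actin) and to a control/calibrator sample."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: a target gene in coculture vs. axenic culture (illustrative Cts)
fold = relative_expression(20.0, 18.0, 23.0, 18.0)
print(fold)  # 8.0: the normalized Ct drops by 3 cycles -> 2^3-fold up
```

Note that identical ΔCt values in treated and control samples give a fold change of exactly 1.0, which is the sanity check usually applied to the calibrator.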
Apoptotic-like PCD assays and localization detection of Cc AIF1 in C. cinerea
Apoptotic-like PCD was measured under dark conditions using an Annexin V-PI apoptosis detection kit (Keygen Biotech, Nanjing, China) in C. cinerea wild type, Ccaif1 silencing, and Ccaif1 overexpressing cells. Briefly, the C. cinerea mycelia were first washed with PBS, then suspended in 500 μL Annexin V binding buffer, to which 5 μL Annexin V-EGFP and 5 μL PI were added, and incubated for 20 min at room temperature. Finally, the fluorescence intensity of the samples was detected using a laser confocal microscope (Olympus, Tokyo, Japan) at 488 nm and 594 nm wavelengths.
Nuclear DNA fragmentation of the C. cinerea cells was assessed by DAPI staining. C. cinerea and Gongronella sp. w5 were inoculated on opposite sides of microscope slides with SAHX solid medium and grown for 4 days until the mycelia of two strains touched each other. The C. cinerea mycelia were fixed in 100% (v/v) methanol at room temperature for 5 min, washed with PBS (pH 7.4), stained with 2 μg/mL DAPI (Beyotime Biotech, Shanghai, China) for 3 min in the dark, and examined with a laser confocal microscope.
For the localization assay in C. cinerea , the GFP- Cc AIF1 overexpression C. cinerea transformant and Gongronella sp. w5 were cocultured on microscope slides with SAHX agar medium. The hyphae were stained with 20 ng/mL Mitotracker Red CMXRos for 15 min and photographed.
Mycelial growth on agar plates, spore counting, and sensitivity to chemicals
C. cinerea wild type and Ccaif1 silencing strains were inoculated with mycelial plugs (5 mm in diameter) onto solid SAHX (for coculture) or FAHX (for axenic culture) medium in the middle of the plates and incubated at 37 °C (Hu et al. 2019 ; Liu et al. 2022 ). In coculture, the initial distance between the inocula of C. cinerea and Gongronella sp. w5 on plates was about 5 cm. All C. cinerea clones were tested in axenic culture and in coculture on three parallel plates. Colonies on all agar plates were photographed daily and measured every 24 h to evaluate the mycelial growth rate of clones. The pictures were transformed into greyscale maps, and the colony areas were calculated by pixel scale using Matlab software (MathWorks Inc., MA).
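The pixel-count colony-area measurement can be illustrated with a minimal Python sketch (the original analysis used Matlab). The tiny greyscale "image", the threshold, and the cm-per-pixel scale below are invented for demonstration only.

```python
# Illustrative re-implementation of the pixel-count colony-area
# measurement. Assumes bright pixels correspond to mycelium on a
# darker background; image values and the scale are made up.

def colony_area(gray_image, threshold, cm_per_pixel):
    """Count pixels brighter than `threshold` and convert the count
    to cm^2 using the imaging scale (cm per pixel)."""
    n = sum(1 for row in gray_image for px in row if px > threshold)
    return n * cm_per_pixel ** 2

image = [
    [10, 12, 11, 10],
    [10, 200, 210, 11],
    [12, 205, 198, 10],
    [11, 13, 190, 12],
]
area = colony_area(image, threshold=128, cm_per_pixel=0.5)
print(area)  # 5 colony pixels * 0.25 cm^2/pixel = 1.25 cm^2
```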
The cocultures on agar plates were incubated in the dark at 37 °C for 7 days for abundant constitutive oidia production (Kües et al. 1998 ). The spores of the entire C. cinerea colonies were harvested from the plates by scraping the mycelium with sterile water and were then counted using a haemocytometer.
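The haemocytometer arithmetic behind such spore counts can be sketched as follows, assuming a standard Neubauer chamber (one large square = 1 mm² × 0.1 mm depth = 0.1 µL, hence the 10^4 factor). The counts, dilution factor, and harvest volume are hypothetical, not values from this study.

```python
# Sketch of standard Neubauer-chamber spore counting arithmetic.
# Counts per large square and the dilution are invented examples.

def spores_per_ml(counts_per_large_square, dilution_factor):
    """Mean count per large square * 1e4 (chamber factor) * dilution."""
    mean_count = sum(counts_per_large_square) / len(counts_per_large_square)
    return mean_count * 1e4 * dilution_factor

conc = spores_per_ml([52, 48, 50, 49, 51], dilution_factor=10)
print(conc)          # 5000000.0 oidia per mL of suspension
total = conc * 20.0  # e.g. if the colony was harvested in 20 mL
print(total)         # 100000000.0 oidia per colony
```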
To test the sensitivity of mycelia to chemicals, 100 mM H 2 O 2 (Sangon Biotech, Shanghai, China) or 1 mM acetic acid (Sangon Biotech, Shanghai, China) was added to FAHX and SAHX agar plates for axenic cultures and cocultures, respectively. Mycelial growth rates were determined as described above.
All growth experiments were performed independently three times, always in triplicate plates per test case and run.
Reactive oxygen species (ROS) and H 2 O 2 assays
Intracellular ROS levels were measured using the fluorogenic probe DCFH-DA (2′,7′-dichlorodihydrofluorescein diacetate) (Beyotime Biotech, Shanghai, China). H 2 O 2 levels were detected using H 2 O 2 assay kits (Beyotime Biotech, Shanghai, China) as described in detail by Liu et al. ( 2022 ). C. cinerea wild type and mutant transformants from separated cocultures were harvested at 0, 12, 24, 36, 48, 60, 72, 84, and 96 h of incubation for ROS and H 2 O 2 concentration assays. All experiments were performed independently three times, and all samples were examined in triplicate.
Laccase assay and native polyacrylamide gel electrophoresis (native-PAGE)
Aliquots of culture broth from separated cocultures were withdrawn every 12 h for laccase activity detection using 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonate) (ABTS) (0.5 mM) as the substrate, following Bourbonnais and Paice ( 1990 ). Native-PAGE was performed on 12% polyacrylamide gels, as described by Pan et al. ( 2014 ). Gels were incubated at 25 °C in the citrate-phosphate buffer (pH 4.0) containing 1 mM ABTS for about 0.5 h and photographed using a digital camera.
The wild type and Ccaif1 overexpression transformants were cultured in axenic culture in FAHX medium, and 1 mM H 2 O 2 was added at 24 h of cultivation. Laccase activity in culture supernatants was determined with ABTS every 24 h and visualized on native-PAGE.
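The spectrophotometric activity calculation behind the ABTS assay can be sketched as follows (one unit, U, = 1 µmol ABTS oxidized per min). The extinction coefficient (ε420 ≈ 36,000 M⁻¹ cm⁻¹ for the ABTS radical cation) is a commonly used literature value, and the assay volumes are invented; neither is stated in this paper.

```python
# Sketch of a laccase activity calculation from the linear absorbance
# increase of oxidized ABTS. Epsilon and volumes are assumed values.

def laccase_activity_u_per_l(delta_a_per_min, total_vol_ml, sample_vol_ml,
                             epsilon=36000.0, path_cm=1.0):
    """U/L of sample: Beer-Lambert rate in the cuvette, referred back
    to the sample volume, with mol converted to umol (1e6)."""
    rate_m_per_min = delta_a_per_min / (epsilon * path_cm)  # mol/L/min
    return rate_m_per_min * (total_vol_ml / sample_vol_ml) * 1e6

activity = laccase_activity_u_per_l(0.36, total_vol_ml=1.0, sample_vol_ml=0.1)
print(round(activity, 3))  # 100.0 U/L of culture supernatant
```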
LC-MS experiments
Protein bands with laccase activity were collected from native-PAGE gels, and the proteins in the gels were trypsin-digested for LC-MS analysis (Applied Protein Technology, Shanghai, China). MaxQuant software (v.1.5.3.17) was used for MS data analyses ( https://www.maxquant.org/ ). Proteins were identified by searching the data against the annotated genome of C. cinerea downloaded from the GenBank database (Stajich et al. 2010 ).
Statistical analyses
All experimental data are presented as mean ± standard deviation (SD). Statistical significance was evaluated by one-way ANOVA followed by Student's t -test with GraphPad Prism 7.0 (GraphPad Software, Boston, MA, USA). p < 0.05 was considered statistically significant.
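The pooled-variance Student's t statistic underlying such pairwise comparisons can be sketched as follows; the replicate values are invented, and in practice the p-value is then obtained from the t distribution (as GraphPad Prism does internally).

```python
# Sketch of the two-sample Student's t statistic (pooled variance).
# Replicate values are illustrative only; significance is then read
# from the t distribution with df = n_a + n_b - 2.
from statistics import mean, variance
from math import sqrt

def students_t(a, b):
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

t = students_t([10.0, 11.0, 12.0], [13.0, 14.0, 15.0])
print(round(t, 2))  # -3.67 with df = 4
```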
Apoptotic-like PCD and increased transcription of a potential AIF are observed in C. cinerea during interspecific interaction with Gongronella sp. w5
Previous comparative transcriptomic analysis using C. cinerea Okayama 7 mycelia separated in dialysis tubes while cocultured with Gongronella sp. w5 for 18 h and 28 h, respectively, revealed expression of defense strategies in C. cinerea during fungal-fungal interactions (Liu et al. 2022 ). Intracellular ROS were upregulated and acted as signal molecules to stimulate defense responses by expressing various detoxification proteins. Simultaneously, two upregulated DEGs (differentially expressed genes), CC1G_08456 (2.17-fold at 28 h compared with 18 h) and CC1G_10894 (2.36-fold at 28 h compared with 18 h), were annotated to K22745 (apoptosis-inducing factor 2, AIF2), suggesting apoptotic-like PCD occurred in C. cinerea when interacting with Gongronella sp. w5. These two genes were named Ccaif1 and Ccaif2 , respectively.
To verify this hypothesis, C. cinerea mycelia at the extension fronts were first double-stained with the two markers Annexin V and PI, which bind to externalized phosphatidylserine (PS) and to nuclei, respectively. A scheme of the cell morphology assay on microscope slides of C. cinerea strains interacting with Gongronella sp. w5 is shown in Fig. 1 a. The mycelia were strongly stained by Annexin V at 1–3 days of coculture (Fig. 1 a), indicating exposure of PS on the outer leaflet of the plasma membrane and cells undergoing apoptotic-like PCD. Red staining observed at 2–4 days of coculture (Fig. 1 a) indicated that PI entered the cells upon cell membrane disintegration and confirmed the onset of PCD. Furthermore, Annexin V staining decreased and PI staining increased upon prolonged confrontation. Compared to cells with compact, well-defined nuclei in axenic culture, the nuclei of C. cinerea during the fungal-fungal interaction harbored diffuse or fragmented DNA (Fig. 1 b). Second, qRT-PCR was performed to confirm whether the two identified DEGs correlated with the induction of apoptotic-like PCD. As shown in Fig. 1 c, the expression of the two DEGs did not change over time in axenic cultures of C. cinerea , whereas in C. cinerea and Gongronella sp. w5 cocultures, Ccaif1 transcription was enhanced at 12 h and peaked at 10.2-fold at 24 h; by the end of cultivation, it was still expressed at higher levels than in axenic cultures. In contrast, Ccaif2 showed no significant change or only marginal variation (at 24 h) in expression throughout cultivation, in axenic culture as in coculture. These results suggested that Ccaif1 may play an essential role in the induction of apoptotic-like PCD in C. cinerea .
Phylogeny and secondary structure analysis of Cc AIF1
Ccaif1 is a gene with three introns localized on chromosome 12. The mature transcript has an open reading frame of 1149 bp, and the deduced mature protein consists of 382 amino acids (aa). Cc AIF1 (accession No. OR379352 in NCBI) has a conserved Pyr_redox domain and belongs to the pyridine nucleotide-disulfide oxidoreductase superfamily (Fig. 2 a), whose members all contain nicotinamide adenine dinucleotide phosphate [NAD(P)] binding sites (Fig. 2 a) embedded in sequences interacting with FAD (Li et al. 2022 ). BLASTP analysis in the NCBI and UniProt databases showed that Cc AIF1 shares sequence similarity with AIF and AMID homologs from other eukaryotic species, with the highest sequence similarity of 50% to C . neoformans Aif1. Furthermore, Cc AIF1 showed a sequence identity of 25.94% with human AMID and 20% with C. albicans AIF1 (Fig. 2 a). Homology analysis using MEGA software confirmed that Cc AIF1 is more closely related to Aif1 of the unicellular basidiomycete C . neoformans than to the homologs from other species (Fig. 2 b). PSORT II ( https://psort.hgc.jp/form2.html ) analysis predicted a mitochondrial localization for Cc AIF1 (52.2%), with a potential presequence processing site IRT|TV at aa 64 and a putative NLS (nuclear localization signal) at aa positions 192-198 (PDKFRKA).
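The identity figures above are simple pairwise alignment statistics: matching positions divided by aligned (non-gap) positions. As a toy sketch with invented aligned fragments (not Cc AIF1 sequence):

```python
# Toy percent-identity calculation over a pairwise alignment.
# Gap columns ('-') are excluded; the fragments are invented.

def percent_identity(aln_a, aln_b):
    pairs = [(a, b) for a, b in zip(aln_a, aln_b) if a != '-' and b != '-']
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

pid = percent_identity("MKTAYIA-QR", "MKSAYLA-KR")
print(round(pid, 2))  # 66.67: 6 matches over 9 non-gap columns
```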
Cc AIF1 localizes in mitochondria and its overexpression in yeast induces apoptosis
Cc AIF1 was heterologously expressed in S. cerevisiae Y1H behind the yeast GAL1 promoter to further explore its physiological function. The resultant yeast Y1H- Cc AIF1 strain grew well on a glucose-containing SD medium. In contrast, once it was cultured on a galactose medium to trigger Cc AIF1 expression, cell viability decreased (Fig. 3 a). DAPI staining showed that the number of cells showing chromatin condensation increased from 10 to 73% after adding galactose for 48 h, based on counts of 500 cells each (Fig. 3 b). The fragmentation of DNA was further demonstrated by both TUNEL staining and comet experiments (Fig. 3 c; Fig. S1 ). In addition, galactose-cultured Y1H- Cc AIF1 cells were often larger and their cell surface wrinkled, similar to what was described before for cells overexpressing the native yeast AMID (Li et al. 2006 ). In comparison, the nuclei of cells transformed with the empty plasmid pYES2CT were regular and homogeneous in shape, even when grown on galactose medium (Fig. 3 b). Together, these results suggested that Ccaif1 overexpression caused apoptosis in yeast cells. A further localization assay using the Y1H-GFP- Cc AIF1 strain and the GFP- Cc AIF1 overexpression C. cinerea transformant revealed that GFP- Cc AIF1 was mainly located in mitochondria, in line with the PSORT II prediction (Fig. 3 d and e). However, upon prolonged confrontation with Gongronella sp. w5, GFP- Cc AIF1 only partially colocalized with the mitochondrial marker, indicating its translocation from the mitochondrial intermembrane space to the cytoplasm in C. cinerea during apoptotic-like PCD.
Ccaif1 silencing inhibits but overexpression promotes the apoptotic-like PCD in C. cinerea during coculture
C. cinerea protoplasts were transformed with the Ccaif1 antisense silencing plasmid or the Ccaif1 overexpression plasmid to investigate whether Cc AIF1 is involved in apoptotic-like cell death in the native host during fungal interspecific interaction. Twenty silencing transformants and four overexpression transformants were obtained in cotransformation using p Cc Ade8 and adenine as the selection marker, followed by PCR amplification of the inserted Ccaif1 antisense or Ccaif1 cDNA fragment, respectively, as proof. Based on the qRT-PCR results, the transcriptional level of Ccaif1 in all 20 potential silencing transformants was reduced by 50–85% in axenic culture compared to that observed in the wild-type strain (Fig. S2a ). Furthermore, three overexpression strains exhibited three- to fivefold increases in the Ccaif1 transcription level (Fig. S2b ).
Four silencing strains named R-3, R-13, R-14, and R-20 exhibiting more than 75% interference efficiency and three overexpression strains named OV-2, OV-3, and OV-4 were chosen in the following coculture experiments. Similar to the axenic culture, transcriptional levels of Ccaif1 in the four silencing strains decreased by about 75–85% compared to the wild-type strain at 0 h in separated coculture with Gongronella sp. w5 (Fig. 4 a). During 0–60 h of coculture, Ccaif1 transcripts were all significantly less upregulated in these silencing strains than in the wild-type strain ( p < 0.01, p < 0.001, or p < 0.0001) (Fig. 4 a). By comparison, the transcriptional levels of Ccaif1 increased dramatically from 0 to 36 h in the three overexpression strains compared to that in the wild-type strain (Fig. 4 b).
Annexin V–PI staining was performed on the wild type, Ccaif1 silencing, and Ccaif1 overexpression C. cinerea mycelia at the confrontation side after interacting with Gongronella sp. w5 for 2 days on SAHX agar plates. The phenotypes of R-3 and OV-3 were randomly selected and are presented in Fig. 5 . Compared with the wild-type strain, Ccaif1 silencing C. cinerea mycelia showed a slightly stronger fluorescence intensity of Annexin V staining but a substantially decreased fluorescence intensity of PI staining. In contrast, Ccaif1 overexpression C. cinerea exhibited dramatically weaker fluorescence intensities of Annexin V staining than the other two types of mycelia. However, compared with the wild-type strain, the fluorescence intensities of PI staining were increased by Ccaif1 overexpression. The mycelia from the non-confronted side of these strains were also stained and observed. No obvious difference occurred in either Annexin V or PI staining, consistent with the finding that Ccaif1 expression was strongly induced only in coculture and not in axenic culture (Fig. 1 c). Together, these results suggest that Ccaif1 silencing inhibits, whereas overexpression promotes, apoptotic-like PCD in C. cinerea during interactions with Gongronella sp. w5.
Cc AIF1 is necessary for C. cinerea to confront oxidative stress in fungal-fungal interaction
To examine the effect of Cc AIF1 on oxidative stress confrontation in C. cinerea during its interaction with Gongronella sp. w5, the growth rates of the C. cinerea transformants were first measured in axenic cultures to elucidate their sensitivity to oxidative stress. All strains showed no significant difference in mycelial growth when cultured on FAHX agar plates without chemical addition. When exposed to 100 mM H 2 O 2 or 1 mM acetic acid, Ccaif1 silencing transformants showed higher sensitivity than the wild-type strain, and their growth expansion rates were significantly reduced. In contrast, the growth rates of Ccaif1 overexpression transformants were faster than those of the other two types of strains (Fig. 6 ; Fig. S3 ). For example, after exposure to 1 mM acetic acid for 5 days, the colony area of the wild-type strain was 36.99 ± 0.56 cm 2 , while the colony areas of strains R-3 and OV-3 were significantly different, at 27.20 ± 1.55 cm 2 ( p < 0.05) and 39.50 ± 0.86 cm 2 ( p < 0.05), respectively.
Next, cocultures were performed to analyze the growth rates of the wild type, Ccaif1 -silencing, and Ccaif1 -overexpression C. cinerea clones during antagonistic interaction with Gongronella sp. w5. The growth of C. cinerea was restricted after cocultivation on SAHX agar plates. Ccaif1 silencing further slowed the growth of C. cinerea , whereas Ccaif1 overexpression reversed the sensitivity of C. cinerea to Gongronella sp. w5. For instance, the colony area decreased from 24.70 ± 0.84 cm 2 in the wild-type strain to 15.69 ± 0.85 cm 2 ( p < 0.01) in transformant R-3 but increased to 29.48 ± 0.77 cm 2 ( p < 0.05) in transformant OV-3 (Fig. 6 ). Moreover, the absolute number of oidia of the C. cinerea wild-type strain was much higher than that of the Ccaif1 silencing transformants but lower than that of the Ccaif1 overexpression transformants after interaction with Gongronella sp. w5 for 7 days (Table S2 ). When relative numbers (sum of oidia/colony area) were calculated, differences were still present but less pronounced, suggesting that oidia production was affected both by Cc AIF1 expression levels and by altered growth speed.
The intracellular ROS and H 2 O 2 concentrations of the wild-type, Ccaif1 -silencing, and Ccaif1 -overexpression C. cinerea clones were tested and compared during their separated coculture with Gongronella sp. w5 to examine the effect of Cc AIF1 on the oxidative stress confrontation in C. cinerea . The four selected silencing transformants showed 30–50% decreased ROS concentrations and 17–45% decreased H 2 O 2 concentrations compared to the parental wild-type strain during the first 12–48 h of cocultivation, whereas no significant differences were observed among the cultures at 60 and 72 h of incubation (Fig. 7 ). In the three overexpression transformants, 15–25% increased ROS levels and 12–20% increased H 2 O 2 concentrations were observed at 12–36 h of cultivation. However, after 60 h and 72 h of coculture, their ROS levels were lower than those in the wild-type strain (Fig. 7 ).
Cc AIF1 regulates laccase Lcc9 activation in C. cinerea during fungal-fungal interaction
Previously, we demonstrated that ROS acted as signal molecules for the enhanced laccase Lcc9 production in C. cinerea when interacting with Gongronella sp. w5 (Liu et al. 2022 ). The changes in ROS concentration in the abovementioned C. cinerea transformants may therefore be related to Lcc9 expression. In the following, laccase activities and transcripts were compared between transformants and the wild type during separated cocultivation with Gongronella sp. w5. As shown in Fig. 8 a, compared with the wild-type strain, all four silencing transformants exhibited 60–75% lower laccase activities over the whole coculture time, whereas the three overexpression transformants presented higher laccase activities. Native-PAGE of coculture supernatant samples at 60 h showed that Lcc9 expression was significantly affected by Ccaif1 silencing or overexpression ( p < 0.05 or p < 0.01) (Fig. 8 b and c). In addition, Lcc1 and Lcc5 activities were also decreased in three of the four selected Ccaif1 silencing transformants (Fig. 8 b). The lcc9 transcriptional level showed 50–80% downregulation in Ccaif1 silencing transformants and 55–75% upregulation in Ccaif1 overexpression transformants (Fig. 8 d).
To further illustrate whether Cc AIF1-mediated ROS signals could promote Lcc9 expression also in axenic cultures, the laccase activities were compared between the wild-type strain and two randomly chosen Ccaif1 overexpressing strains in response to 1 mM H 2 O 2 . The total laccase activity of the wild-type strain did not change when exposed to H 2 O 2 , whereas it was upregulated hundreds of times in the transformants OV-1 and OV-2. Specifically, the maximum activity was 684 U/L and 794 U/L in OV-1 and OV-2 cultures at 5 days of incubation, respectively (Fig. 8 e). Furthermore, native-PAGE and LC-MS analysis inferred that Lcc9 was more strongly expressed in the two Ccaif1 overexpressing transformants in axenic culture than in the wild-type strain. Unexpectedly, however, two isozymes were also expressed and identified through LC-MS analysis as Lcc8 and Lcc13 (Fig. 8 f). Among them, Lcc8 contributed most of the laccase activity in the fermentation supernatant, more than Lcc9. Therefore, Cc AIF1 transmits H 2 O 2 signals and regulates the expression of several laccase isozymes, including Lcc9, in C. cinerea . As unraveled by this work, Cc AIF1 overexpression combined with H 2 O 2 stimulation is therefore a new and very effective strategy for high-yield laccase production in cultures.

Discussion
Several criteria have been reported for the consensus definition of apoptotic-like PCD in fungi (Shlezinger et al. 2012 ; Hardwick 2018 ; Herrmann and Riemer 2021 ), such as PS externalization on the outer leaflet of the plasma membrane, chromatin condensation, DNA fragmentation, and pro-apoptotic proteins releasing from the IMS (Carmona-Gutierrez et al. 2018 ). Compared with unicellular yeasts, multicellular fungi typically form a network of interconnected cells sharing a common cytoplasm and organelles (Daskalov et al. 2017 ). Several studies have focused on apoptotic-like PCD in pathogenic multicellular fungi (Cheng et al. 2003 ; Dinamarco et al. 2011 ; Banoth et al. 2020 ; Chen et al. 2021 ), whereas only a few nonpathogenic multicellular fungal species, such as C. cinerea and Pleurotus sp., were reported to show typical chromatin condensation and DNA fragmentation after entry to meiotic metaphase (Lu et al. 2003 ) and exhibit nuclear condensation, ROS accumulation, and DNA fragmentation when exposed to heat stress (Song et al. 2014 ). These studies suggest that apoptotic-like PCD might exist in all multicellular fungi. However, the roles of apoptotic-like PCD in many biological processes and its regulation mechanisms remain underexplored.
In this work, we identified Cc AIF1 as an apoptosis-inducing factor acting in apoptotic-like PCD and stress-regulated gene expression in C. cinerea . Cc AIF1 has the structural hallmarks of AIFs and AMIDs known from other apoptotic systems, such as a conserved Pyr_redox domain, and it is predicted by PSORT II to localize in mitochondria (Fig. 2 ). However, such predictions of localization for a potential AIF need to be backed up experimentally, as they can vary widely between different AIFs and AMIDs. S. cerevisiae Aif1p (378 aa), for example, has a potential proteolytic VRL|TV cleavage motif at aa 61. It is located in mitochondria and translocates to the nucleus in response to apoptotic stimuli (Wissing et al. 2004 ), although its localization is predicted by PSORT II to be the endoplasmic reticulum (ER; 44.4%). In contrast, human AMID (373 aa), which attaches to the outside of mitochondria (Wu et al. 2002 ), is predicted to lack a proteolytic cleavage site and to be cytoplasmic (60.9%). None of these three proteins has the typical extra N-terminal bipartite MLS with membrane tether, unlike the larger human AIF1 (613 aa; predicted mitochondrial location 39.1%; proteolytic cleavage motif TRQ|MA at aa 62; NLS PEQKQKK at aa 106-112), which can enter the IMS (Susin et al. 1999 ; Wu et al. 2002 ; Sevrioukova 2011 ) but has in addition been detected in the ER for transport into mitochondria (Chiang et al. 2012 ). Heterologous expression in yeast and overexpression in C. cinerea both indicated Cc AIF1 to localize to mitochondria (Fig. 3 d and e). Upon expression, the yeast underwent apoptotic-like PCD (Fig. 3 a), and overexpression and silencing of Ccaif1 in the native host provided further evidence that Cc AIF1 acts in apoptotic-like PCD also in C. cinerea (Fig. 5 ).
PI is often costained with Annexin V to yield Annexin V/PI double-stained cells, marking the phenotypical shift from early stages in apoptotic-like PCD to secondary necrosis by cellular entry of PI facilitated by cellular membrane disintegration, which is classified as late apoptosis (Büttner et al. 2007 ; Rogers et al. 2017 ; Carmona-Gutierrez et al. 2018 ). In our study, when C. cinerea interacted with Gongronella sp. w5, the mycelia at extension fronts were strongly stained by Annexin V from 1 day of cultivation (Fig. 1 a), along with the increased staining of PI and nucleic DNA fragmentation seen shortly after (Fig. 1 a and b). Thus, apoptotic-like PCD occurs in C. cinerea during fungal antagonistic interactions. Cc AIF1 transcripts increased in the C. cinerea wild-type strain throughout its coculture with Gongronella sp. w5 (Fig. 1 c). At the same time, Ccaif1 silencing inhibited and Ccaif1 overexpression promoted the apoptotic-like PCD process (Fig. 5 ). The results from phylogeny analysis (Fig. 2 b) and heterologous expression of Cc AIF1 in S. cerevisiae (Fig. 3 ) furthermore demonstrated that Cc AIF1 is an AIF homolog that drives apoptotic-like PCD. The induced apoptotic-like PCD is necessary for C. cinerea to antagonize Gongronella sp. w5, perhaps due to the removal of damaged cells, as silencing Ccaif1 in transformants slowed down the growth rate of C. cinerea , while overexpression of Ccaif1 reversed this phenotype (Fig. 6 ). Therefore, for the first time, we identified an AIF in multicellular basidiomycetes and demonstrated its important function in apoptotic-like PCD during multicellular fungal antagonistic interactions. However, the function of Cc AIF2, which harbored 60% sequence similarity with Cc AIF1, might not be related to fungal antagonism according to its unchanged transcriptional levels during coculture (Fig. 1 d). As PCD was observed also in mycelial aging and fruiting body development (Lu and Sakaguchi 1991 ; Shlezinger et al. 2012 ), these AIF paralogs might work in different physiological processes and act synergistically to maintain cellular homeostasis.
AIF or AMID members are not just apoptosis-inducing factors. They are characterized by an oxidoreductase domain involved in mitochondria metabolism, redox control, and stress confrontation (Urbano et al. 2005 ; Joza et al. 2009 ; Elguindy and Nakamaru-Ogiso 2015 ; Herrmann and Riemer 2021 ). Based on the secondary structure analysis, Cc AIF1 had a conserved Pyr_redox domain (Fig. 2 a). Ccaif1 silencing led to higher sensitivity to oxidative stress caused by chemicals or the presence of Gongronella sp. w5, whereas its overexpression resulted in stronger resistance of transformants compared to the wild-type strain of C. cinerea (Fig. 6 ). These results are consistent with observations on A. nidulans aifA (Savoldi et al. 2008 ), as well as the AMID homolog of N. crassa , the deletion of which resulted in reduced resistance against chemicals or H 2 O 2 (Castro et al. 2008 ). In contrast, PaAIF2 and PaAMID2 are reported to be negatively related to oxidative stress tolerance in P. anserina (Brust et al. 2010 ). These discrepancies might be partially associated with the alternate localizations of AIF or AMID in cells (Brust et al. 2010 ). Similar to the mitochondria-localized N. crassa AIF (Castro et al. 2008 ) and C. albicans AIF1 (Ma et al. 2016 ), Cc AIF1 in this study was mitochondrial and positively regulated oxidative stress confrontation during coculture (Figs. 3 and 6 ).
Over 90% of cellular ROS are produced by mitochondria via the escape of electrons from the mitochondrial electron transport system (D'Autréaux and Toledano 2007 ; Montibus et al. 2015 ). Ccaif1 overexpression resulted in higher cellular ROS and H 2 O 2 levels than in the wild-type C. cinerea strain at 12–36 h of cultivation, corresponding with upregulated Lcc9 expression and activities in both coculture and axenic culture (Figs. 7 and 8 ). Thus, Cc AIF1 might affect the mitochondrial respiratory complex and act as a ROS homeostasis controller. The results also reinforced our previous conclusion that ROS contributed to lcc9 activation (Pan et al. 2014 ; Liu et al. 2022 ). Moreover, Lcc9 was demonstrated to be used as a powerful defense strategy to eliminate oxidative stress during fungal interactions (Liu et al. 2022 ). It was assumed that the induced laccase was responsible for the decreased ROS levels after 60 h of coculture in C. cinerea (Fig. 7 ). Upregulated PCD and secretion of Lcc9 together facilitated stress confrontation and enhanced cell growth of C. cinerea during interaction with Gongronella sp. w5.
Interestingly, in this study, it was observed for the first time that another two previously silent isozymes of the 17 distinct C. cinerea laccases (Kilaru et al. 2006a ; Rühl et al. 2013 ; Pan et al. 2014 ), Lcc8 and Lcc13, were expressed from their native genes in axenic culture of Ccaif1 overexpression strains treated with H 2 O 2 , with Lcc8 becoming the major isozyme (Fig. 8 ). Lcc8 has two predicted isoforms: one of normal laccase length (567 aa) but without a predicted signal peptide, and a “long” one (728 aa) with a signal sequence of 23 aa at the N-terminus of an unusual 161 aa-long N-terminal protein extension (Schulze et al. 2019 ). Though Lcc8 is grouped into the same laccase subfamily as Lcc9, neither of the isoforms has been detected in C. cinerea wild-type cultures, probably due to the lack of splicing of a first postulated intron required to give the normal-length Lcc8 version with an in-frame ATG start codon (Kilaru et al. 2006a ). Adding the C. cinerea lcc1 sequences for a functional signal peptide to the shorter basic lcc8 coding sequence enabled secreted expression of functional Lcc8 laccase under control of the Agaricus bisporus gpdII promoter in transformed C. cinerea (Schulze et al. 2019 ). Lcc8 in this study migrated in native gels below laccase Lcc9, similar to Lcc1 and Lcc5 (Fig. 8 ). Whether this indicates that it is the shorter enzyme version without an apparent signal peptide, and whether secretion of a potentially intracellular enzyme might occur through disintegration of cellular membranes, needs to be analyzed in further work. In any case, the activation of Lcc8 and Lcc13 here suggested that both might be induced by ROS signals and used as defense strategies to eliminate oxidative stress, as Lcc9 is.
In our former study, H 2 O 2 -induced oxidative stress-responsive target genes included the gene for the stress-responsive transcription factor Skn7, which is supposed to positively regulate Lcc9 expression (Ko et al. 2009 ; Liu et al. 2022 ; Yaakoub et al. 2022 ). Sequences in the DNA fragments confirmed as binding motifs of Skn7 in ChIP-Seq data of C. albicans (Basso et al. 2017 ) were found in the lcc9 promoter region (TCTAGA, − 180 to − 174 bp) and also in the promoter regions of lcc8 (TATGCA, − 334 to − 329 bp relative to the start codon of the long lcc8 version) and lcc13 (TCTAGA, − 162 to − 157 bp), suggesting a possible regulation by Skn7 also of these two genes. However, whether these silent laccase genes are then stress-activated, and to what extent, might depend on transcription factors additional to Skn7 induced by different kinds and concentrations of oxygen species (Chen et al. 2008 ; Quinn et al. 2011 ; Yaakoub et al. 2022 ). In comparison with the coculture condition (Liu et al. 2022 ), exposure to 1 mM H 2 O 2 might have triggered more or different transcription factors in the C. cinerea AIF overexpression transformants, resulting in the activation of genes lcc8 and lcc13 . Thus, the mechanisms by which oxygen species regulate the transcription of different laccase genes are potentially complex and require further investigation.
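Motif coordinates like those above are reported relative to the start codon; a minimal sketch of such a promoter scan, using an invented toy sequence (not the actual promoter) and a simple convention in which the base immediately before the ATG is position −1:

```python
def motif_positions_upstream(promoter, motif):
    """Report all motif occurrences as (start, end) coordinates relative
    to the start codon, assuming `promoter` ends right before the ATG."""
    n = len(promoter)
    hits = []
    i = promoter.find(motif)
    while i != -1:
        hits.append((i - n, i - n + len(motif) - 1))
        i = promoter.find(motif, i + 1)
    return hits

# Toy 200-bp "promoter" with a single TCTAGA site, for illustration only.
toy = "A" * 20 + "TCTAGA" + "G" * 174
print(motif_positions_upstream(toy, "TCTAGA"))
```

Under this convention a 6-bp motif starting 180 bp upstream spans −180 to −175; off-by-one differences from published coordinates usually reflect a different counting convention.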
In summary, we have identified a mitochondria-localized AIF homologue, Cc AIF1, in the basidiomycete C. cinerea . In response to the mucoromycete Gongronella sp. w5, Cc AIF1 upregulates the cellular ROS content of C. cinerea to increase Lcc9 expression and translocates to the cytoplasm to participate in apoptotic-like PCD. Together, these strategies facilitate C. cinerea in confronting oxidative stress and enhancing mycelial growth during fungal-fungal interactions (Fig. 9 ). Furthermore, for biotechnological applications, our work shows that overexpression of Cc AIF1 in combination with appropriate stimulation by oxidative stress is an effective strategy to enhance laccase production in C. cinerea axenic culture.

Abstract
Apoptotic-like programmed cell death (PCD) is one of the main strategies for fungi to resist environmental stresses and maintain homeostasis. The apoptosis-inducing factor (AIF) has been shown in different fungi to trigger PCD through upregulating reactive oxygen species (ROS). This study identified a mitochondrial localized AIF homolog, Cc AIF1, from Coprinopsis cinerea monokaryon Okayama 7. Heterologous overexpression of Cc AIF1 in Saccharomyces cerevisiae caused apoptotic-like PCD of the yeast cells. Ccaif1 transcription increased when C. cinerea interacted with Gongronella sp. w5, accompanied by typical apoptotic-like PCD in C. cinerea , including phosphatidylserine externalization and DNA fragmentation. Decreased mycelial ROS levels were observed in Ccaif1 silenced C. cinerea transformants during cocultivation, along with reductions in apoptotic levels, mycelial growth, and asexual sporulation. By comparison, Ccaif1 overexpression led to the opposite phenotypes. Moreover, the transcription and expression levels of laccase Lcc9 decreased by Ccaif1 silencing but increased strongly in Ccaif1 overexpression C. cinerea transformants in coculture. Thus, in conjunction with our previous report that intracellular ROS act as signal molecules to stimulate defense responses, we conclude that Cc AIF1 is a regulator of ROS to promote apoptotic-like PCD and laccase expression in fungal-fungal interactions. In an axenic culture of C. cinerea , Cc AIF1 overexpression and H 2 O 2 stimulation together increased laccase secretion with multiplied production yield. The expression of two other normally silent isozymes, Lcc8 and Lcc13, was unexpectedly triggered along with Lcc9.
Key points
• Mitochondrial CcAIF1 induces PCD during fungal-fungal interactions
• CcAIF1 is a regulator of ROS to trigger the expression of Lcc9 for defense
• CcAIF1 overexpression and H 2 O 2 stimulation dramatically increase laccase production
Supplementary Information
The online version contains supplementary material available at 10.1007/s00253-023-12988-1.
Keywords

Supplementary Information
Below is the link to the electronic supplementary material.

Author contribution
JF: methodology, investigation. GZ: methodology, software. HZ, DX, JZ: investigation. UK: writing — review and editing. YX: resources. ZF: conceptualization, supervision, writing — review and editing, funding acquisition. JL: conceptualization, supervision, methodology, writing — original draft, writing — review and editing, funding acquisition.
Funding
This work was supported by the Chinese National Natural Science Foundation grant (Nos. 31800051, 31870098), the Science Fund for Distinguished Young Scholars of Anhui Province (2008085J12), and the Key Research Program of the Department of Education of Anhui Province (KJ2021A0056).
Data availability
All data supporting the findings of this study are available within the paper and its Supplementary Information.
Declarations
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Conflict of interests
The authors declare no competing interests.

Published in Appl Microbiol Biotechnol. 2024 Jan 13; 108(1):1-20 (PMC10787690). License: CC BY.
PMC10787691 (PMID: 38217728)

Introduction
Paediatric urology arguably begins with the study of childhood stone disease, which was much more prevalent in the last two centuries than today. This may have been a result of dehydration, diarrhoeal illness, or diets consisting of predominantly single grains [ 1 ]. The first children’s hospital to provide exclusive medical care to infants was the Hôpital des Enfants Malades in Paris in 1802. The Hospital for Sick Children on Great Ormond Street in London opened in 1852, largely due to the efforts of Charles West, with fund-raising assistance by Charles Dickens. The American Surgical Association was founded in 1879 by Samuel David Gross, followed nine years later by the American Pediatric Society in 1888. The Society’s 4th president, William Osler, remarked on surgical specialisation that “The rapid increase in knowledge has made concentration in work a necessity: specialism is here, and here to stay” [ 2 ]. Following the passage of the controversial Sheppard-Towner Act in the USA, a large group of paediatricians broke away from the American Medical Association to form the American Academy of Pediatrics in 1931, from which a Section on Urology was established in 1960. The aim was to “improve the practice, expand the knowledge base, teach our successors... and disseminate paediatric genitourinary expertise to practitioners outside of our small domain. Our goal should be nuclei of paediatric urology centres and training programs in rigorous academic milieus, not only training our successors, but also teaching general urologists, paediatric surgeons, paediatricians and others” [ 3 ].
In contrast, the Royal College of Surgeons of England only allowed the specialist designation of urology apart from general surgery in 1952, and paediatric surgery (whose progress correlated with advances in critical care medicine and anaesthesiology) had only begun to take off as a speciality during the First World War. Having slowly conceded the treatment of orthopaedics, cardiac surgery and neurosurgery, pioneers like William Ladd, Robert Gross, and Orvar Swenson refused to concede paediatric urology. Disagreements arose between urologists and paediatric general surgeons as to who should operate on Wilms’ tumours in children and who had the best five-year survival rates [ 4 ]. Thus, a dichotomy of training pathways existed for paediatric urology, with the Atlantic Ocean as a metaphorical “no-man’s land”. Depending on where one trained and became established, there would, in general, be only one accepted route into paediatric urology training.
Sub certification in paediatric urology took 25 years to come to fruition in North America and required leadership unanimity. Paediatric urology programmes were established to provide a platform and a framework for different groups to come together under a single sub-specialised umbrella. It was similar in concept to the justification cited by Sir Denis Browne in establishing the specialty of paediatric surgery when he said, “Paediatric surgery exists as a specialty, not to establish a monopoly but to establish a standard” [ 5 – 7 ].
There is, however, no known evidence for differences in clinical/operative outcomes between each camp, nor are there differences in the number and quality of publications produced. With current trends towards gender/sex equality (50% of medical students in the USA are female, as are 46% of paediatric urology fellows in North American training, but only 9.9% of practising urologists) [ 8 ], diversity and inclusivity, as well as a push towards multidisciplinary input, it is argued that these siloes should be removed. The aim of this study was to assess whether there were any noticeable differences in certification, casemix or publishing patterns between paediatric urologists trained through adult urology and those trained through paediatric general surgery, and to understand attitudes in each group towards fellowship training and appointment to consultant/attending level.

Methods
An 18-item cross-sectional survey was compiled through the EAU Young Academic Urologists (YAU) office (Appendix 1 ) and disseminated to a trans-Atlantic convenience sample of current practising paediatric urologists. This Google Forms questionnaire was approved centrally by the YAU office and was created using a mini-Delphi method through the YAU research meetings to provide semi-quantitative data on the current opinions and attitudes of this cohort. Inclusion criteria for the target population were adult urologists who undertook regular paediatric work and general paediatric surgeons who considered themselves to be paediatric urologists. Exclusion criteria were those who did not undertake any paediatric urological work or did so infrequently. There were no interventions in this observational study. The survey was disseminated amongst the personal network of YAU members using email and social media, focussing on highly active paediatric urologists.
The primary outcomes were to assess for evidence and duration of paediatric urology fellowship training, publishing practices and the percentage of time spent on a shortlist of index paediatric urological procedures and topics. Secondary outcomes included general casemix and practice types and attitudes to paediatric urology sub certification. Not all questions were answered by all respondents; there was an overall full completion rate of 93%. Where questions were left unanswered, calculations were based on those who did answer the question. Data were anonymised and analysed using GraphPad Prism 9.41 (USA; 2022). A p value of 0.05 was taken as significant. The study was deemed not to require ethics approval by the hospital research and ethics committee.

Results
A total of 228 respondents completed the survey. Due to broad dissemination on specific social media channels, it was not possible to ascertain how many viewed or partially completed and then failed to submit the survey. As the survey was anonymised, it was also not feasible to cross-check answers against peer-reviewed publications. There was a 60:40 specialty split in favour of urology, with female respondents representing 37% and 34% for urology and paediatric general surgery, respectively. There were no significant differences in respondent age, experience, practice mix, or the availability of a paediatric urology fellowship in their own institution in either group. 29% of respondents undertook a paediatric urology fellowship in North America, 60% undertook a fellowship in Europe and 11% subspecialised in Australia/Asia/South America.
Those respondents who initially trained through adult urology were statistically more likely to have a higher sole commitment to paediatric urology than those who trained in paediatric general surgery (> 80% paediatric urology: 83 vs. 56%, p = 0.0001) and were more likely to provide dedicated out-of-hours emergency cover in paediatric urology (96 vs. 80%, p = 0.002). Respondents who initially trained in adult urology were also cumulatively more likely to have completed longer fellowships in paediatric urology (2+ years: 60 vs. 40%, p = 0.006) than those who trained in paediatric general surgery, and were more likely to publish five or more peer-reviewed manuscripts per annum (12 vs. 3%, p = 0.02). Both groups felt that a paediatric urology fellowship was of very high importance (Table 1 ).
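Comparisons of proportions like those above can be approximated with a two-proportion z-test. The sketch below uses only the standard library; the counts are illustrative reconstructions from the reported percentages and the 60:40 split of 228 respondents (roughly 137 vs. 91), not the raw study data.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error
    (normal approximation; adequate when expected counts are large)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # P(|Z| >= |z|), two-sided
    return z, p

# Illustrative: ~83% of 137 urology vs ~56% of 91 paediatric surgery
# respondents reporting > 80% commitment to paediatric urology.
z, p = two_proportion_z_test(114, 137, 51, 91)
print(f"z = {z:.2f}, two-sided p = {p:.2g}")
```

GraphPad Prism, as used in the study, would typically report an equivalent chi-square or Fisher's exact result for such 2 × 2 comparisons.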
Respondents were also asked about clinical and operative casemix in their practice across a number of conditions pre-determined through the mini-Delphi consensus. The results illustrated that there were no differences in those who dealt with the clinical management of bladder/bowel dysfunction, neurogenic bladder, prenatal hydronephrosis, or disorders/differences in sexual development (DSD). There were also no statistically significant differences in the operative management of Mitrofanoff/Antegrade Continence Enema channel creation, hypospadias, epispadias, ureteral reimplantation, or posterior urethral valve ablation. Those who initially trained in adult urology were more likely to manage bedwetting ( p = 0.02), and to perform renal transplantation ( p = 0.05), percutaneous nephrolithotomy ( p = 0.05), flexible ureterorenoscopy ( p = 0.004) and robotic-assisted reconstructive surgery ( p = 0.03). Respondents who had initially trained in paediatric general surgery were more likely to perform laparoscopic-assisted reconstructive surgery ( p = 0.02) and to be involved with the repair of cloacal anomalies ( p = 0.01) and cloacal exstrophy ( p = 0.05) (Fig. 1 ).
Respondents were finally asked to express their views on eight statements relating to the provision of paediatric urology specialist care. There were no statistically significant differences in responses to the statements between groups. Nearly 90% of respondents felt that there was no particular difference whether paediatric urology services were provided by those who initially trained in either adult urology or paediatric general surgery, as long as the attending/consultant was appropriately fellowship trained. Similar numbers of respondents (> 80%) also felt that having a mixture of backgrounds was advantageous to patients due to the differences in skill mix and in promoting inclusion and diversity. Greater than 90% of respondents felt that having a mixture of training backgrounds was advantageous to patients with respect to developing a successful adolescent/transitional care program. Nearly one-third of respondents felt that out-of-hours cross cover was challenging; however, > 80% felt that this would not be an issue if there were a dedicated paediatric urology out-of-hours/emergency rota. Fewer than 10% of respondents believed that having a mixture of training backgrounds was confusing and would not work, and that paediatric fellowship-trained urologists should not perform complex paediatric urology (Fig. 2 ).

Discussion
The study has demonstrated a number of similarities and differences between paediatric urologists trained in adult urology and those trained in paediatric general surgery. The demographics and practice types were broadly similar between both groups. It is unclear why adult urologists tended to have longer fellowships in paediatric urology. Arguably this may be because they are not as used to handling smaller tissue, or it may reflect the requirements and expectations of the countries in which they train and work. Similarly, those institutions which have traditionally employed paediatric general surgeons may not necessarily require more than one year of fellowship; however, the European Society of Paediatric Urology states a minimum training requirement of two years in paediatric urology ( https://www.espu.org/images/ebpu/ETR_Paediatric_Urorolgy_2020_05_04_v2.pdf ). There is no available literature around these differences. It is also likely that the greater dedication of adult-trained paediatric urologists to exclusively paediatric urology on-call cover reflects differences in work commitments among hospitals, where paediatric general surgeons may be required to provide emergency out-of-hours cover for general surgical conditions. This has led to significant trainee attrition over the last number of years, with figures as high as 4.2%, which in turn affects the ability to sub-specialise care [ 9 , 10 ].
The casemix for adult urologists and paediatric general surgeons demonstrated significant overlap; however, there were subtle differences in operations which may reflect background training. The adult-trained paediatric urologists were more likely to perform endourological and robotic-assisted procedures, whereas the paediatric general surgeons were more likely to perform laparoscopic-assisted procedures and those involving hindgut reconstruction. There has been a general narrowing of paediatric general surgery casemix over the last decade towards a more focused scope of practice, which may also be reflected here [ 11 ]. Other evidence points towards a proportional increase in procedures of lesser complexity compared to prior decades, with lower-volume, higher-complexity procedures being referred to supra-specialised centres [ 12 , 13 ].
Despite these differences, respondents were largely unanimous in their opinions regarding pathways of training. Both urologists and paediatric surgeons believed a fellowship to be essential and broadly welcomed collaboration between both groups, as they felt that it had the ability to enhance patient care, especially in the area of adolescent/transitional care. Given the concerns of paediatric urology fellows regarding job availability and the financial pressures associated with fellowship training, an integrated approach would allow for a network effect to increase patient services, expand and enhance departments, and would go some way towards reducing these concerns [ 14 ].
Those few countries which have traditionally kept these pathways separate (UK, Ireland) for those pursuing a career solely in paediatric urology are now under pressure to open up faculty appointments, both to allow greater access to trainee education and in light of consultant/attending shortages. It would appear superficially intuitive that having well-trained paediatric urologists is a given in developing an integrative model of care, yet this has, to our knowledge, never been demonstrated in the literature.
This study was, as all survey studies are, limited by a certain risk of inclusion bias and an unknown number of unanswered questionnaires. However, with a non-directed dissemination strategy yielding a 40%/60% inclusion of both specialties and a relatively large number of respondents, we believe that the results presented are representative. Furthermore, one cannot ensure geographically balanced replies from different countries, whose individual circumstances may influence the results of such a survey. Given the findings of this study, we would strongly endorse that an integrative and collaborative approach be adopted worldwide to allow for the optimal management of these patients; in countries with both training backgrounds available, forming departments with attendings/consultants from both specialties might be an appealing option instead of fostering competition.

Conclusion
This study represents the first time that a cross-sectional cohort of paediatric urologists from different training backgrounds was compared to assess casemix, practice patterns and attitudes. Paediatric urology is in a unique position to have two specialities in the supply chain, each adding complementary competences and nuances to the large spectrum of clinical practice. Furthermore, providing optimal transitional and lifelong care is an asset valuable to many patients. Historical barriers to practising as a paediatric urologist have no role in the modern context, and any artificial silos should be scrutinised under policies of equity, diversity and inclusivity.

Objective
To identify any self-reported differences or attitudes towards certification, publication, or practice patterns between adult urology-trained and paediatric general surgery-trained paediatric urology providers. No differences in clinical, operative, or research outcomes between the two groups have been published to date.
Methods
An 18-item cross-sectional survey was compiled through the EAU Young Academic Urologists (YAU) office and disseminated to a trans-Atlantic convenience sample of current practising paediatric urologists. This was created using a mini-Delphi method to provide current semi-quantitative data relating to current opinions and attitudes of this cohort.
Results
A total of 228 respondents completed the survey, with female respondents representing 37% and 34% of urology and paediatric general surgery, respectively. Nearly 90% of respondents overall felt that a full 2-year paediatric fellowship program was very important, and 94% endorsed a collaborative dedicated paediatric urology on-call service, with 92% supporting the joint development of transitional care. Urology managed higher numbers of bedwetting ( p = 0.04), bladder bowel dysfunction ( p = 0.02), endourological procedures ( p = 0.04), and robotics ( p = 0.04). Paediatric general surgery managed higher numbers of laparoscopic reconstruction ( p = 0.03) and posterior urethral valve ablation ( p = 0.002).
Conclusion
This study represents the first time that a cross-sectional cohort of paediatric urologists from different training backgrounds was compared to assess productivity, practice patterns and attitudes. Paediatric urology is in a unique position to have two contributing specialities, with the ability to provide optimal transitional and lifelong care. We believe that there should be a strong emphasis on collaboration and on removing any historically created barriers under policies of equity, diversity and inclusivity.
Supplementary Information
The online version contains supplementary material available at 10.1007/s00345-023-04743-y.
Supplementary Information
Below is the link to the electronic supplementary material.

Author contributions
OF protocol/project development; data collection or management; data analysis; Manuscript writing/editing. tHLA protocol/project development; data collection or management; data analysis; manuscript writing/editing. BMB protocol/project development; data analysis; manuscript writing/editing. LRJM protocol/project development; data collection or management; data analysis; manuscript writing/editing. SS protocol/project development; data analysis; manuscript writing/editing. HM manuscript writing/editing; data analysis. BE manuscript writing/editing; data analysis. BN manuscript writing/editing; data analysis. DMI manuscript writing/editing; data analysis. PI protocol/project development; manuscript writing/editing. AA protocol/project development; manuscript writing/editing. SAF protocol/project development; manuscript writing/editing. HB protocol/project development; data collection or management; manuscript writing/editing. SS protocol/project development; data analysis; manuscript writing/editing.
Funding
Open Access funding provided by the IReL Consortium. Nil.
Data availability
Data is available on request.
Declarations
Conflict of interest
The authors declare that there are no actual or perceived conflicts of interest.
Ethical approval
Not applicable.
Research involving human participants and/or animals
Not applicable.
Informed consent
Not applicable.

License: CC BY. World J Urol. 2024 Jan 13; 42(1):34.
PMC10787692

Introduction
Influenza A virus (IAV) is a respiratory pathogen, which contains a negative-sense, single-stranded RNA genome with eight viral RNA (vRNA) segments (Krammer et al. 2018 ). IAV infections cause a high disease burden each year, with an estimated 290,000–650,000 deaths globally (WHO 2023 ). Current preventive measures include annual flu vaccination and the use of antivirals. However, updating the composition of seasonal vaccines for the northern and southern hemisphere is a time-consuming process that carries the risk of poor vaccine effectiveness due to a mismatch of vaccine strains (reviewed by Chen et al. (Chen et al. 2021 )). Moreover, the use of presently available antivirals like neuraminidase or M2 ion channel inhibitors has resulted in the emergence of resistant IAV strains (Chen et al. 2021 ). Consequently, further options for flu disease prevention and treatment are highly desirable.
Defective interfering particles (DIPs) of IAV are naturally occurring viral mutants that inhibit infectious standard virus (STV) propagation of IAV (Dimmock et al. 2008 ; Hein et al. 2021a ; Huo et al. 2020 ; Pelz et al. 2021 ; Zhao et al. 2018 ). In addition, DIPs with antiviral activity exist for many other virus families (Chaturvedi et al. 2021 ; Levi et al. 2021 ; Rezelj et al. 2021 ; Smither et al. 2020 ; Welch et al. 2020 ). Therefore, DIPs were suggested as promising antivirals (Bdeir et al. 2019 ; Frensing 2015 ; Genoyer and Lopez 2019 ; Karki et al. 2022 ; Vasilijevic et al. 2017 ). Conventional IAV DIPs (cDIPs) contain a large internal deletion in one of the eight vRNAs (Dimmock and Easton 2015 ). The short defective interfering (DI) vRNAs are believed to replicate faster than the parental full-length (FL) vRNA in a co-infection with STV, thereby drawing away cellular and viral resources from STV (i.e., “replication inhibition”) (Laske et al. 2016 ; Marriott and Dimmock 2010 ; Nayak et al. 1985 ; Rüdiger et al. 2021 ). As a result, IAV DIPs can suppress a variety of IAV strains including epidemic and pandemic human, and even highly pathogenic avian IAV as shown in in vitro and in animal experiments (Dimmock et al. 2008 , Dimmock et al. 2012b ; Huo et al. 2020 ; Kupke et al. 2019 ; Zhao et al. 2018 ). Simultaneously, they strongly stimulate the interferon (IFN)-induced antiviral activity against IAV infections (Frensing et al. 2014 ; Huo et al. 2020 ; Penn et al. 2022 ). Furthermore, this unspecific innate immune response stimulation can also suppress replication of unrelated viruses including severe acute respiratory syndrome coronavirus (SARS-CoV-2) (Easton et al. 2011 ; Pelz et al. 2023 ; Rand et al. 2021 ; Scott et al. 2011 ). Previously, we discovered a new type of IAV DIP named “OP7”. OP7 harbors multiple nucleotide substitutions in segment (Seg) 7 vRNA instead of the large internal deletion of cDIPs. 
The 37 point mutations involve promoter regions, genome packaging signals, and encoded proteins (Kupke et al. 2019 ). Relative to cDIPs, OP7 exhibited an even higher interfering efficacy in vitro and in vivo, highlighting its potential for use as an antiviral (Hein et al. 2021a , 2021c ; Rand et al. 2021 ).
Recently, we established a cell culture-based production system for “OP7 chimera DIPs” that harbor both, nucleotide substitutions in Seg 7 vRNA plus a large internal deletion in Seg 1. In the presence of cDIPs, the addition of STV is not required for their propagation, and DIP harvests do not contain any infectious material. This renders UV inactivation unnecessary and alleviates safety and regulatory concerns with respect to medical application (Dogra et al. 2023 ). For this, we modified a reverse genetics workflow (Bdeir et al. 2019 ; Hein et al. 2021a ) and reconstituted a population of two types of DIPs: OP7 chimera DIPs (Fig. 1 a) and Seg 1 cDIPs (Fig. 1 b). OP7 chimera DIPs harbor Seg 7 of OP7 (Seg 7-OP7) vRNA, a truncated Seg 1 vRNA, and the remaining six FL vRNAs. Seg 1 cDIPs contain a deletion in Seg 1 vRNA and seven FL vRNAs. To complement for the defect in virus replication, suspension Madin-Darby canine kidney (MDCK) cells were genetically engineered that express the viral polymerase basic 2 (PB2) protein (encoded on Seg 1) (Bdeir et al. 2019 ; Hein et al. 2021a ) and are used for cell culture-based production. First results with OP7 chimera DIP material harvested from shake flasks suggest a high tolerability and high antiviral efficacy after intranasal administration in mice. These initial experiments, however, only resulted in relatively low total virus titers with OP7 chimera DIP fractions of 78.7% (Dogra et al. 2023 ).
In the present work, we developed a scalable laboratory-scale process in a stirred tank bioreactor (STR) for high-yield production of almost pure OP7 chimera DIP preparations. In perfusion mode, we even achieved a 79-fold increase in total virus yields compared to the original batch process in shake flasks. Together with a steric exclusion-based chromatographic purification train, this process may be adopted towards good manufacturing practice (GMP) production for safety and toxicology studies and clinical trials.

Materials and methods
Cells and viruses
MDCK cells growing in suspension culture (Lohr et al. 2010 ) and stably expressing PB2 (encoded by Seg 1) (Bdeir et al. 2019 ; Hein et al. 2021b ), referred to as MDCK-PB2(sus) cells, were used. These cells were cultivated in Xeno™ medium (Shanghai BioEngine Sci-Tech) supplemented with 8 mM glutamine and 0.5 μg/mL puromycin as selection antibiotic and maintained in shake flasks (125 mL baffled Erlenmeyer flask with vent cap, Corning, #1356244) in 50 mL working volume (V W ). Cultivations of cell cultures were performed in an orbital shaker (Multitron Pro, Infors HT; 50 mm shaking orbit) at 185 rpm, 37 °C, and 5% CO 2 environment. MDCK(adh) cells grew in Glasgow Minimum Essential Medium (GMEM) supplemented with 10% fetal bovine serum (FBS, Merck, #F7524) and 1% peptone (Thermo Fisher Scientific, #211709). For adherent MDCK cells (ECACC, #84121903) expressing PB2 (MDCK-PB2(adh), generated by retroviral transduction, as described in (Bdeir et al. 2019 )), the medium was supplemented with 1.5 μg/mL puromycin. Human alveolar epithelial Calu-3 cells were provided by Dunja Bruder (Helmholtz Centre for Infection Research, Braunschweig, Germany) and cultivated in Minimum Essential Medium (MEM) with 10% FBS, 1% penicillin/streptomycin, and 1% sodium pyruvate at 37 °C and 5% CO 2 . Viable cell concentration (VCC), viability, and cell diameter were quantified by a cell counter (Vi-Cell™ XR, Beckman Coulter). Metabolite concentrations (glucose, lactate, glutamine, ammonium) were quantified with a Cedex Bio® Analyzer (Roche).
IAV strain A/PR/8/34 H1N1 (STV PR8) was provided by the Robert Koch Institute (Berlin, Germany, #3138) (seed virus: infectious virus titer of 1.1 × 10 9 50% tissue culture infectious dose (TCID 50 )/mL). The OP7 chimera DIP seed virus (4.5 × 10 6 plaque-forming units (PFU)/mL) was previously produced in MDCK-PB2(sus) cells in batch mode after a complete medium exchange (CME) in shake flasks at a multiplicity of infection (MOI) of 10 –4 at 37 °C (Dogra et al. 2023 ). MOIs reported in the following are based on the TCID 50 titer (Genzel and Reichl 2007 ) (interference assay) or the plaque titer (OP7 chimera DIP production).
Production of OP7 chimera DIPs in shake flasks
For infection experiments in shake flasks, 250-mL shake flasks (baffled Erlenmeyer flask with vent cap, Corning, #1356246) with 100 mL V W were used. To produce OP7 chimera DIPs in batch mode, cells were infected at 2.0 × 10 6 cells/mL either by direct inoculation after a CME, or by 1:2 dilution with fresh medium (MD) of a culture grown to 4.0 × 10 6 cells/mL. Production with CME was performed as described recently (Hein et al. 2021a ). In brief, MDCK-PB2(sus) cells in exponential growth phase were centrifuged (300 × g, 5 min, room temperature (RT)). The cell pellet was resuspended in fresh medium (without puromycin) containing trypsin (final activity 20 U/mL, Thermo Fisher Scientific, #27250–018). Subsequently, cells were seeded into shake flasks and infected at a MOI of 10 –4 at about 2.0 × 10 6 cells/mL at 37 °C. For production with MD, cells were centrifuged (300 × g, 5 min, RT) and resuspended at 0.6 × 10 6 cells/mL in fresh medium (without puromycin). Next, cells were cultivated up to about 4.0 × 10 6 cells/mL and then diluted (1:2) with fresh medium containing trypsin (final activity of 20 U/mL) for subsequent infection at 37 °C or at 32 °C using indicated MOIs. For sampling, aliquots of cell suspensions were centrifuged (3000 × g, 4 °C, 10 min) and supernatants were stored at -80 °C until further analysis. From these supernatants, vRNAs of progeny virions were purified using the NucleoSpin RNA virus kit (Macherey–Nagel, #740956) according to the manufacturers’ instructions, and stored at -80°C until real-time reverse transcription-quantitative PCR (real-time RT-qPCR).
Batch mode production of OP7 chimera DIPs in a STR
Cells grown in shake flasks were centrifuged (300 × g, 5 min, RT), resuspended in fresh puromycin-free medium and used to inoculate a 1 L STR (DASGIP® Parallel Bioreactor System, Eppendorf AG, #76DG04CCBB) at 0.5 × 10 6 cells/mL (400 mL V W ). The STR was equipped with an inclined blade impeller (three blades, 30° angle, 50 mm diameter, 150 rpm) and a L-macrosparger. A mixture of air and oxygen was provided to control the dissolved oxygen above 40% air saturation. pH control (pH 7.6, reduced to 7.4 as soon set point pH 7.6 could no longer be maintained) was achieved by CO 2 sparging and addition of 7.5% NaHCO 3 . During the cell growth phase, temperature was set to 37 °C and cells were grown to about 4.0 × 10 6 cells/mL. Prior to infection, temperature was reduced to 32 °C, MD (1:2 dilution with fresh medium) was performed (final V W about 700 mL) and cells were infected at a MOI of 10 –4 and pH of 7.4.
Production of OP7 chimera DIPs in a STR in perfusion mode
An alternating tangential flow filtration (ATF2) system with C24U-v2 controller (Repligen), equipped with a hollow fiber membrane (polyethersulfone (PES), 0.2 μm pore size, Spectrum Labs) was coupled to the 1 L STR described above (final V W about 700 mL) for perfusion cultivation. Cells were inoculated at 1.2 × 10 6 cells/mL and cultivated for 1 day in batch mode. Subsequently, perfusion was started and the recirculation rate was set to 0.9 L/min. For perfusion rate control, a capacitance probe connected to an ArcView Controller 265 (Hamilton) was utilized (Göbel et al. 2023 ; Gränicher et al. 2021 ; Hein et al. 2021b , 2023 ; Nikolay et al. 2018 ; Wu et al. 2021 ). Using linear regression, the permittivity signal was converted to the VCC and used to control the permeate flow rate of a connected peristaltic pump (120 U, Watson-Marlow). The cell factor in the ArcView controller was re-adjusted after every sample taking to keep a cell-specific perfusion rate (CSPR) of 200 pL/cell/day as described previously (Hein et al. 2021b ). The feed flow rate was controlled based on the weight of the bioreactor. Prior to infection, one reactor volume (RV) was exchanged with fresh medium and temperature was lowered to 32 °C. After infection at a MOI of 10 –4 , the permeate flow rate was set to 0 RV/day for 1 h, kept constant at 2.4 RV/day until 30 h post infection (hpi) and finally increased to 2.6 RV/day. In order to prevent oxygen limitation during cell growth phase (Hein et al. 2021b ), 0.5 L/h of air was provided 77.2 h after inoculation using an additional microsparger.
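The capacitance-based control loop described above (permittivity converted to VCC by linear regression, perfusion rate set to hold a fixed CSPR) can be sketched as follows. This is a minimal illustration, not the controller used in the study: the regression slope and intercept below are hypothetical placeholder values (the text reports only the fit quality, R² = 0.997).

```python
def vcc_from_permittivity(permittivity, slope, intercept):
    """Convert the online permittivity signal to a viable cell concentration
    (cells/mL) via linear regression, as described in the text.
    `slope` and `intercept` are hypothetical calibration values."""
    return slope * permittivity + intercept


def perfusion_rate_rv_per_day(vcc_cells_per_ml, cspr_pl_per_cell_per_day=200.0):
    """Perfusion rate (reactor volumes/day) that holds a constant
    cell-specific perfusion rate (CSPR).

    pL/cell/day x cells/mL = pL of medium per mL of culture per day;
    multiplying by 1e-9 (pL -> mL) gives culture volumes (RV) per day.
    """
    return cspr_pl_per_cell_per_day * vcc_cells_per_ml * 1e-9


# E.g., during the growth phase a culture at 12 x 10^6 cells/mL at a CSPR of
# 200 pL/cell/day would be perfused at 2.4 RV/day.
rate = perfusion_rate_rv_per_day(12e6)
print(f"{rate:.1f} RV/day")  # -> 2.4 RV/day
```

In practice the controller re-computes the rate whenever the permittivity-derived VCC is updated, which is why the cell factor was re-adjusted after every sample.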
Membrane-based steric exclusion chromatography
Harvested OP7 chimera DIP material was clarified (3000 × g, 10 min, 4 °C) and spiked with sucrose (5%, 84097, Merck). Next, consecutive filtration steps with regenerated cellulose membranes (1.0 μm, #10410014; 0.45 μm, #10410214; 0.2 μm, #10410314, Cytiva) were performed for clarification using a bottle top system coupled to a vacuum pump. To remove host cell DNA, the clarified OP7 chimera DIP material was supplemented with MgCl 2 (2 mM final concentration, #M8266, Merck) and treated with an unspecific nuclease (40 U/mL final activity, Denarase®, #2DN100KU99, Sartorius Stedim Biotech) for 4 h under mixing. Purification was done by membrane-based steric exclusion chromatography (SXC) (Marichal-Gallardo et al. 2017 , Marichal-Gallardo et al. 2021 ) as described recently (Hein et al. 2021a , 2021c ). An ÄKTA Pure 25 system (Cytiva) was used for chromatography at RT. UV monitoring was performed at 280 nm and virus particles were monitored using a NICOMP™ 380 (Particle Sizing Systems) at 632.8 nm. The filter unit (in the following referred to as “column”) was packed with regenerated cellulose membranes (1.0 μm pore size, 20 layers, 100 cm 2 total surface) and installed in a 25 mm stainless steel filter housing. The flow rate was 10 mL/min. For equilibration, the column was washed with water and then with binding buffer (8% PEG-6000 in PBS, #81260, Merck). Next, the sample was injected (in-line mixing with 16% PEG-6000 in PBS to achieve 8% PEG-6000). Subsequently, the column was washed with binding buffer until baseline UV absorbance was reached. Elution was conducted with 20 column volumes of elution buffer (PBS). The eluate was dialyzed overnight at 4 °C against PBS (sample to buffer ratio of 1:1000) using cellulose ester dialysis tubing (300 kDa cut-off, #GZ-02890–77). Subsequently, the material was spiked with sucrose (5%). Finally, the material was sterile filtered (0.2 μm, cellulose acetate syringe filter, #16534-K, Sartorius Stedim Biotech).
Virus quantification
Real-time RT-qPCR was used to quantify purified vRNAs of progeny virions as described previously (Kupke et al. 2019 ). Primers used for quantification of the vRNA of Seg 7-OP7 are listed in (Hein et al. 2021c ) and, for Seg 7 of the wild-type (WT) virus (Seg 7-WT), in (Dogra et al. 2023 ). The plaque assay was carried out to quantify infectious virus titers with MDCK(adh) cells (interference assay) and MDCK-PB2(adh) cells (seed virus titer of OP7 chimera DIP preparation) as described previously (Hein et al. 2021a , 2021c ; Kupke et al. 2020 ) with a measurement error of ± 0.2 log 10 . A hemagglutination assay (HA assay) was used to determine total virus titers (log 10 (HAU/100 μL)) with a measurement error of ± 0.15 log 10 (HAU/100 μL) (Kalbfuss et al. 2008 ).
The accumulated HA titer (log 10 (HAU/100 μL)) was estimated from the HA titer of the harvest in the bioreactor vessel plus the virus particles collected after the hollow fiber membrane (detected in the permeate line) and quantified according to Eq. 1 . HA B denotes the HA titer of the sample taken at the optimal harvest time point in the bioreactor vessel, V W (mL) of the bioreactor vessel, HA P the average HA titer of material collected in the permeate line between the sample time point t n and the previous sample time point t n-1 with the harvested volume (V p ).
The concentration of DIPs (c DIP , virions/mL) was calculated using Eq. 2 , where c RBC denotes the concentration of chicken red blood cells used in the HA assay (2.0 × 10 7 cells/mL).
The total number of produced virus particles vir tot (virions) was determined according to Eq. 3 . c B denotes the c DIP in the bioreactor vessel at the optimal harvest time point, and c p the average c DIP in the permeate line between t n and t n-1 .
The cell-specific virus yield (CSVY, virions/cell) was calculated using Eq. 4 , where VCC max (cells/mL) denotes the maximum VCC after time of infection (TOI).
The space–time yield (STY, virions/L/day) was determined using Eq. 5 . t tot (day) denotes the total time from inoculation until the optimal harvest time point.
The volumetric virus productivity (VVP, virions/L/day) was estimated according to Eq. 6 , where V tot denotes the total volume of the spent medium during cell growth and virus production phase.
The percentage of virus particles that passed the pores of the hollow fibers (P Perm , %) was determined according to Eq. 7 . n denotes the total number of sample time points, HA P the HA titer in the permeate line at t n , and HA B the HA titer in the bioreactor vessel at t n .
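As a worked illustration of the yield coefficients described above, the sketch below implements one plausible reading of Eqs. 2–5 from the variable definitions given in the text; the exact published equation forms are not reproduced here, and all culture values in the example are illustrative, not study data.

```python
C_RBC = 2.0e7  # chicken red blood cell concentration in the HA assay (cells/mL)


def c_dip(ha_log_titer):
    """Eq. 2 (sketch): DIP concentration (virions/mL) from an HA titer
    given as log10(HAU/100 uL)."""
    return C_RBC * 10.0 ** ha_log_titer


def vir_tot(c_bioreactor, v_w_ml, permeate_fractions=()):
    """Eq. 3 (sketch): total produced virus particles = particles in the
    vessel at harvest plus particles collected in the permeate line.
    `permeate_fractions` is an iterable of (avg_concentration, volume_ml)."""
    total = c_bioreactor * v_w_ml
    for c_p, v_p in permeate_fractions:
        total += c_p * v_p
    return total


def csvy(total_virions, vcc_max_per_ml, v_w_ml):
    """Eq. 4 (sketch): cell-specific virus yield, virions/cell, relative to
    the maximum VCC after time of infection."""
    return total_virions / (vcc_max_per_ml * v_w_ml)


def sty(total_virions, v_w_ml, t_tot_days):
    """Eq. 5 (sketch): space-time yield, virions/L/day, over the total time
    from inoculation to the optimal harvest time point."""
    return total_virions / (v_w_ml / 1000.0) / t_tot_days


# Illustrative numbers only: a 700 mL vessel harvested at 4.0 log10(HAU/100 uL)
c_b = c_dip(4.0)                       # 2.0e11 virions/mL
total = vir_tot(c_b, 700.0)            # 1.4e14 virions
per_cell = csvy(total, 25e6, 700.0)    # 8000 virions/cell
```

The same scaffolding extends to Eq. 6 (VVP, dividing by the total spent-medium volume) and Eq. 7 (the permeate fraction of particles).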
Interference assay
To determine the in vitro interfering efficacy of the produced OP7 chimera DIP material, an interference assay was used. Specifically, we evaluated the inhibition of STV propagation after co-infection with OP7 chimera DIPs. Co-infections were performed in MDCK(adh) cells (Hein et al. 2021a , 2021c ) or in Calu-3 cells. Calu-3 cells were seeded at a concentration of 3.0 × 10 6 cells/well in a 12-well plate and incubated for 24 h prior to infection. For infection, cells were washed with PBS and infected with STV PR8 at a MOI of 0.05 or co-infected with 125 μL of the produced OP7 chimera DIP material in a total volume of 250 μL of media. After 1 h, wells were filled up to 2 mL with medium. Supernatants were harvested at indicated time points, centrifuged at 3000 × g for 10 min at 4 °C and cell-free supernatants stored at -80 °C until virus quantification. To extract intracellular RNAs, 350 μL of RA1 buffer (Macherey–Nagel, #740961) with 1% β-mercaptoethanol was added to cells remaining in wells for lysis. RNA purification from these lysates was carried out according to the manufacturer’s instructions and samples were stored at -80 °C until real-time RT-qPCR to monitor IFN-β gene expression as described previously (Kupke et al. 2019 ; Rand et al. 2021 ). Fold changes were calculated using the ΔΔC T method.
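The fold-change computation mentioned above reduces to the standard 2^(−ΔΔCT) formula: the target gene is first normalised to a reference gene within each condition, and the treated condition is then compared to the control. A minimal sketch with made-up CT values:

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative gene expression by the 2^-(ddCT) method: normalise the
    target gene (e.g. IFN-beta) to a reference gene, then compare the
    treated condition against the control condition."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)


# Hypothetical CT values: IFN-beta amplifies 3 cycles earlier in co-infected
# cells while the reference gene is unchanged -> 8-fold upregulation.
fc = fold_change_ddct(22.0, 18.0, 25.0, 18.0)
print(fc)  # -> 8.0
```

Note that the method assumes near-100% amplification efficiency for both genes; the CT values here are illustrative only.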
Statistical analysis and data visualization
GraphPad Prism 9 (GraphPad Software) was used for statistical analysis and data visualization. Either one-way analysis of variance (ANOVA) followed by Tukey’s multiple comparison test, two-way ANOVA followed by Dunnett’s multiple comparison test, or unpaired t test were used to determine significance.

Results
Medium dilution impairs yields, whereas infection at 32 °C increases OP7 chimera DIP titers and fractions in shake flasks
Previously, a CME prior to infection has been performed for cell culture-based production of OP7 chimera DIPs (Dogra et al. 2023 ). However, this is difficult to implement at larger scales without cell retention devices. Therefore, following a cell growth phase until about 4.0 × 10 6 cells/mL, we added fresh medium (MD, 1:2 dilution) to supply substrates and reduce the level of inhibitors accumulated as by-products. To investigate whether a reduction in temperature has a positive effect on virus replication and yields (Hein et al. 2021b ; Wu et al. 2021 ), two cultivations were performed at 37 °C and 32 °C with MD; in addition, one cultivation at 37 °C with CME was performed as a control.
The infection with MD at 37 °C resulted in similar VCC dynamics relative to the production with CME (Fig. 2 a). However, a slightly lower maximum HA titer of 2.05 log 10 (HAU/100 μL) compared to 2.20 log 10 (HAU/100 μL) was found (Fig. 2 b). The lower total virus titer is likely associated with an increased ammonium (inhibitor) concentration and a depletion of glutamine during the infection phase (Fig. S1 ). A low OP7 chimera DIP fraction of 31.6% (MD) relative to 71.2% (CME) was reached (Fig. 2 c, based on the extracellular vRNA concentration of Seg 7-OP7 and Seg 7-WT quantified by real-time RT-qPCR). Lowering the temperature to 32 °C before infection counterbalanced this negative effect of MD and resulted in higher HA titers (Fig. 2 b). Here, a maximum of 3.24 log 10 (HAU/100 μL) was observed at 44 hpi, corresponding to an 11-fold increase relative to the production at 37 °C with CME. In addition, virus production at 32 °C resulted in reduced concentrations of ammonium (< 3.6 mM, Fig. S1 ), which should also favor IAV propagation. Finally, the OP7 chimera DIP fraction was greatly increased to 99.7% (Fig. 2 c), an almost pure OP7 chimera DIP preparation. To demonstrate the reproducibility of this optimized production, a second production run was carried out subsequently (Fig. S2), which confirmed these findings.
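Because HA titers are reported on a log10 scale, fold changes in total virus particles follow directly from titer differences; the 11-fold figure quoted above can be checked in one line:

```python
def fold_increase(ha_log_new, ha_log_ref):
    """Fold change in total virus particles between two HA titers on the
    log10(HAU/100 uL) scale."""
    return 10.0 ** (ha_log_new - ha_log_ref)


# 3.24 vs 2.20 log10(HAU/100 uL): 10^1.04 ~ 11-fold, as stated in the text.
print(round(fold_increase(3.24, 2.20)))  # -> 11
```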
Next, we tested the interfering efficacy of the produced material in vitro, in which we assessed the inhibition of STV PR8 replication during co-infection with different produced OP7 chimera DIP materials. For this, we used samples at the respective optimal harvest time points (37 °C CME: 44 hpi, 37 °C MD: 52 hpi, 32 °C MD: 44 hpi) (Fig. 2 d). Here, the HA titer almost plateaued and biological activity of the virus particles sampled is assumed highest before onset of unspecific degradation over time (Genzel et al. 2010 ). For material produced at 32 °C and MD, we observed a strong reduction of the infectious STV PR8 titer (more than three orders of magnitude), which was significantly different to the small reduction observed for material produced at 37 °C and MD ( p < 0.001, one-way ANOVA followed by Tukey’s multiple comparison test). In addition, the reduction of the infectious virus titer was significantly higher than for material produced at 37 °C and CME ( p < 0.01). Overall, this confirms a high interfering efficacy of OP7 chimera DIP preparations produced at 32 °C and MD. Regarding the total virus particle release, as expressed by the HA titer, this trend was less pronounced.
Previous studies clearly suggested that OP7 chimera DIP production and interfering efficacy strongly depend on the MOI (Dogra et al. 2023 ). Therefore, only productions performed at the optimal MOIs are shown in Fig. 2 . Interestingly, however, the dependency of total virus titers, OP7 chimera DIP fraction, and interfering efficacy on the MOI was negligible for MD and 32 °C (Fig. S3).
In summary, the optimized production at 32 °C with MD (1:2) resulted in an increase of total virus yields by 11-fold compared to the previous processes operated at 37 °C and CME. In addition, a production of almost pure OP7 chimera DIP preparation was achieved.
Batch mode production of OP7 chimera DIPs in a bioreactor and purification by SXC
In order to show that production of OP7 chimera DIPs at larger scale is possible, the process was transferred to a STR with 700 mL V W . Three independent productions were performed (STR 1, STR 2, and STR 3) and compared to two productions in shake flasks (SF 1 (Fig. 2 ), and SF 2 (Fig. S2)).
MDCK-PB2(sus) cells were seeded into the STR at approx. 0.5 × 10 6 cells/mL and cultivated (400 mL, 37 °C) until a VCC of about 4.0 × 10 6 cells/mL (Fig. 3 a) was obtained. As in SF, cultivations were performed at 32 °C and MD (1:2) (final V W about 700 mL). Cells were infected at a MOI of 10 –4 . After infection, cells continued to grow (Fig. 3 a), with STR 1–3 and SF 2 showing very similar growth curves before onset of virus-induced cell lysis (max. VCC 2.8–3.6 × 10 6 cells/mL). SF 1 revealed a peak VCC of 5.9 × 10 6 cells/mL, likely due to more rapid growth and a higher VCC at TOI. HA titers were similar ( p > 0.05, unpaired t test) (Fig. 3 b) for all STR runs compared to SF productions. (Note that STR 2 and 3 were terminated at 46 hpi and 58 hpi, respectively, for virus harvest.) In addition, all cultivations showed very high OP7 chimera DIP fractions (98.5–99.7%) at the optimal harvest time point (Fig. 3 c). Finally, results from the in vitro interference assay (Fig. 3 d) showed no significant difference in the reduction of infectious virus particle release ( p > 0.05, one-way ANOVA followed by Tukey’s multiple comparison test) and total virus particle release (HA titer) ( p > 0.05).
For virus purification, material harvested from STR 2 was subjected to SXC. The purified material was tested for antiviral efficacy in comparison to the non-purified material using the in vitro interfering assay (Fig. 3 e). There was no significant reduction in infectious virus particle release for purified material (STR 2, 1.1 × 10 5 PFU/mL) compared to non-purified material (STR 1, 4.1 × 10 5 PFU/mL) ( p > 0.05) (Fig. 3 d and e). Yet, a higher interfering efficacy of the purified material was found for diluted samples (1:50) that showed a significantly higher decrease in the release of infectious virus particles (STR 2, 2.0 × 10 6 PFU/mL) compared to the diluted non-purified material (STR 1, 1.3 × 10 8 PFU/mL) ( p < 0.001) (Fig. 3 e).
Next, we investigated the antiviral activity of the purified OP7 chimera DIP material in vitro in human alveolar epithelial (Calu-3) cells (Fig. 4 ). In contrast to MDCK(adh) cells used for this assay before (Figs. 2 d, 3 d and e), Calu-3 cells have a functional innate immune response against human IAV (Hsu et al. 2011 ; Seitz et al. 2010 ) including an IFN response that induces a cellular antiviral state. Accordingly, MDCK cells were used to only monitor replication inhibition caused by DIP co-infections, whereas the use of Calu-3 cells allowed additional contribution of innate immunity. With the Calu-3 cell assay, we observed a strong suppression of infectious virus particle release (by roughly two orders of magnitude) upon co-infection with non-purified OP7 chimera DIP preparations produced in SF at 37 °C and CME (original process) (Fig. 4 a). After process optimization (32 °C MD DSP), including STR production at 32 °C and SXC purification, the preparations appeared to interfere slightly stronger (three instead of two orders of magnitude, Fig. 4 a), but this difference was statistically not significant ( p = 0.07, one-way ANOVA followed by Tukey’s multiple comparison test). In addition, we observed an early and enhanced upregulation of IFN-β gene expression for both materials compared to STV PR8 infection alone at 6 hpi ( p < 0.0001, two-way ANOVA followed by Dunnett’s multiple comparison test) (Fig. 4 b). This early stimulation may explain part of the inhibitory effect during OP7 chimera DIP co-infection in Calu-3 cells. (Note: there was not enough purified DIP material available that was produced at 32 °C and MD at 48 hpi to perform an analysis.)
In summary, the transfer of production from a SF to a STR resulted in similar HA titers, purity and very comparable interfering efficacies of OP7 chimera DIP harvests. SXC purification of the material obtained from STR resulted in a higher in vitro interfering efficacy in MDCK(adh) but not in Calu-3 cells. These results indicate that further scale-up to higher reactor volumes (e.g., industrial scale) should be easily accomplished.
Perfusion mode production in a bioreactor leads to high cell concentrations, superior yields, and high OP7 chimera DIP purity
Next, we evaluated the possibility of process intensification by cultivation in perfusion mode for OP7 chimera DIP production to achieve higher cell concentrations and thus, higher total virus yields (Bissinger et al. 2019 ; Wu et al. 2021 ). Therefore, we implemented a perfusion system using an ATF2 system (Hein et al. 2021b ).
Cells were seeded at 1.2 × 10 6 cells/mL into the STR (700 mL V W ) (Fig. 5 a) and perfusion mode was initiated 24 h after inoculation. During the cell growth phase (-97 to -2 hpi), a cell-specific growth rate of 0.031 h −1 was achieved, which is comparable to batch production with MD in STR (0.032–0.036 h −1 ) (Fig. 3 a). In addition, viability remained above 97% (Fig. 5 a). This indicates that the use of an ATF2 system has no negative impact on cell growth and survival. During the cell growth phase, the perfusion rate was controlled at a predefined CSPR of 200 pL/cell/day. The linear regression of the offline measured VCC and the online permittivity signal during the cell growth phase showed a R 2 of 0.997 (Fig. S4).
After 97 h, cells were infected at 24.9 × 10 6 cells/mL (as suggested by Hein et al. ( 2021b )) at a MOI of 10 –4 . Before infection, one RV was exchanged with fresh medium (Fig. 5 b and c) by employing an average perfusion rate of 17.6 RV/day for 1 h; in addition, the temperature was lowered to 32 °C. Following infection, the perfusion rate was set at 0 RV/day for 1 h to avoid virus particle wash-out. Subsequently, medium was fed constantly (2.4 RV/day, increased to 2.6 RV/day at 30 hpi) (Fig. 5 b). Over process time, neither a glucose nor a glutamine limitation was detected (Fig. 5 c). Maximum lactate and ammonium concentrations were 34.7 mmol/L and 2.5 mmol/L, respectively (Fig. 5 c). After infection, VCC remained constant until 37 hpi, after which cell lysis started (Fig. 5 a). At 45 hpi, the HA titer peaked at 4.04 log 10 (HAU/100 μL) in the bioreactor vessel (Fig. 5 d). Also note that until the time of optimal harvest, part of the virus particles passed the pores of the hollow fiber membrane (0.2 μm) (P Perm = 26%) (Fig. 5 d), resulting in an accumulated HA titer (HA acc ) of 4.10 log 10 (HAU/100 μL). This corresponded to more than 14-fold higher total virus yields compared to the STRs operated in batch mode with MD (all below 3 log 10 (HAU/100 μL)) (Fig. 3 b). After the time of optimal harvest, decreasing total virus titers were observed in the permeate line (Fig. 5 d), likely due to membrane fouling. Importantly, no cells passed the hollow fiber membrane (data not shown).
Table 1 summarizes HA_acc, the total number of produced virus particles (vir_tot), CSVY, space–time yield (STY), and volumetric virus productivity (VVP), all of which, except for the VVP, were increased compared to the STR batch process performed at 32 °C with MD. Further, these coefficients were slightly higher for the perfusion process when virus particles in the permeate line were taken into account as well, relative to harvesting the bioreactor vessel alone. In addition, very high OP7 chimera DIP fractions (99.8%) were present in both the bioreactor vessel and the permeate line (Fig. 5e). Finally, the in vitro interfering efficacy was evaluated in MDCK(adh) cells (Fig. 5f). At a dilution of 1:50, the material produced in the perfusion culture showed a significantly higher reduction of infectious virus particle release compared to the batch process with MD (STR 1, Fig. 3) (p < 0.001, one-way ANOVA followed by Tukey’s multiple comparison test).
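To make the relation between these coefficients explicit, the sketch below estimates vir_tot, CSVY, and STY from the vessel-side numbers reported above. The conversion factor of about 2 × 10^7 virions per mL and HA unit is our assumption (a commonly used approximation for influenza HA assays) and is not stated in this text, so the absolute values are only indicative:

```python
def process_metrics(ha_log10, volume_ml, vcc_cells_per_ml, process_time_days,
                    virions_per_hau=2.0e7):
    """Rough estimates of total virions, CSVY and STY from an HA titer.

    ha_log10 .......... titer in log10(HAU/100 uL)
    virions_per_hau ... assumed virions/mL per HA unit (approximation)
    """
    virions_per_ml = virions_per_hau * 10 ** ha_log10
    vir_tot = virions_per_ml * volume_ml                 # total virions
    csvy = vir_tot / (vcc_cells_per_ml * volume_ml)      # virions/cell
    sty = vir_tot / (volume_ml * process_time_days)      # virions/mL/day
    return vir_tot, csvy, sty

# Vessel-side numbers from this run: HA_acc = 4.10, 700 mL, 24.9e6 cells/mL,
# ~142 h total process time (97 h growth + 45 h to optimal harvest).
vir_tot, csvy, sty = process_metrics(4.10, 700, 24.9e6, 142 / 24)
print(f"vir_tot ~ {vir_tot:.2e}, CSVY ~ {csvy:.0f} virions/cell")
```

The CSVY estimate lands in the same order of magnitude as the value quoted later in the discussion; differences come from the assumed conversion factor and from how volumes and time points are counted in Table 1.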
Overall, we demonstrate the successful establishment of a perfusion process for cell culture-based production of OP7 chimera DIPs free of contaminating infectious STV. In addition, total virus yields and the CSVY exceeded those of conventional batch processes, and a very high purity of OP7 chimera DIPs (99.8%) was obtained.

Discussion
IAV DIPs are regarded as a highly interesting option for future broad-spectrum antiviral therapy (Dimmock et al. 2008, 2012b; Easton et al. 2011; Huo et al. 2020; Kupke et al. 2019; Rand et al. 2021; Scott et al. 2011; Zhao et al. 2018). We recently established cell culture-based production of OP7 chimera DIPs together with cDIPs in the absence of infectious STV. Yet, only relatively low total virus yields and OP7 chimera DIP fractions were achieved for production in shake flasks (Dogra et al. 2023). Here, we present results for scalable processes in laboratory-scale STRs, including batch and perfusion mode strategies, that yielded up to a 79-fold increase (perfusion) in total virus yields compared to the original batch process in shake flasks. In addition, we demonstrate the production of almost pure OP7 chimera DIP preparations (up to 99.8%), which is advantageous with respect to regulatory requirements for GMP production towards clinical development.
Effect of temperature reduction on DIP titers, purity and interfering efficacy
Our data confirm other studies reporting that a temperature reduction during the virus production phase can increase IAV yields (Fig. 2b) (Hein et al. 2021b; Wu et al. 2021). Similar findings were obtained for vesicular stomatitis virus (VSV) (Elahi et al. 2019), Newcastle disease virus (NDV) (Jug et al. 2023), and recombinant adenovirus (Jardon and Garnier 2003). In contrast, other studies did not see a positive effect of temperature reduction on the replication of viruses, e.g., for recombinant VSV-NDV (Göbel et al. 2023) and yellow fever virus (YFV) production (Nikolay et al. 2018). Furthermore, a reduction of temperature during virus production might also be beneficial regarding virus degradation, as shown for YFV, Zika virus, and IAV (Nikolay et al. 2018; Petiot et al. 2011). At lower temperatures, enzyme activities are reduced, and the degradation of infectious virus particles by, e.g., proteases released by lysed cells can be partly prevented. Moreover, a reduction in temperature to 32 °C can support a shift in cellular metabolism, resulting in a reduced accumulation of ammonium, lactate, and other inhibitory metabolites released into the supernatant. In our study, increased concentrations of ammonium (> 4 mM) were likely associated with the lower total virus yields for OP7 chimera DIP production at 37 °C (Fig. S1d). This is in line with a review reporting that ammonium and lactate concentrations of 2–3 mM and above 20–30 mM, respectively, can affect cell growth and virus yield, depending on the cell line (Schneider et al. 1996). The higher purity of OP7 chimera DIPs (up to 99.8%) in all runs performed at 32 °C might be explained by increased virus replication. As a result, OP7 chimera DIPs likely outgrew Seg 1 cDIPs due to the replication advantage of the Seg 7-OP7 vRNA. The higher interfering efficacy of material produced with MD at 32 °C relative to 37 °C can be attributed to the higher total virus yield and the larger fraction of OP7 chimera DIPs.
Previously, we showed for cultivations with CME performed at 37 °C that the production and interfering efficacy of OP7 chimera DIPs were highly dependent on the MOI (Dogra et al. 2023). For MD at 32 °C, however, total virus titers, the OP7 chimera DIP fraction, and the interfering efficacy were largely unaffected by the MOI. This suggests that selection of the optimal MOI is less critical under this production condition, and process robustness could be improved. In addition, using lower MOIs for production could reduce the costs of seed virus generation.
Process intensification using perfusion mode cultivation
Through process intensification, we achieved a high total virus yield of OP7 chimera DIPs in perfusion culture (24.9 × 10^6 cells/mL), with a strongly increased total number of virus particles and STY (up to 23-fold) relative to the batch process (2.8–3.3 × 10^6 cells/mL), while the VVP was comparable (Table 1). Moreover, we produced similar yields of virus particles (4.10 log10(HAU/100 μL), a CSVY of 10648 virions/cell, 24.9 × 10^6 cells/mL, 32 °C) compared to production of STV IAV in perfusion mode with a different suspension MDCK cell line derived from an adherent MDCK cell line originating from the American Type Culture Collection (ATCC, MDCK ATCC CCL-34) (≥ 4.37 log10(HAU/100 μL), ≥ 9299 virions/cell, ≥ 43 × 10^6 cells/mL, 33 °C) (Wu et al. 2021). An often described phenomenon in virus production is the so-called “cell density effect”, a reduction of the CSVY with increasing VCC (Nadeau and Kamen 2003). This effect is often attributed to the exhaustion of nutrients and the accumulation of inhibitory metabolic by-products, including ammonium and lactate, but can be prevented by cultivation in perfusion mode (Bock et al. 2011; Genzel et al. 2014; Henry et al. 2004). The approximately 2-fold higher CSVY compared to the batch process (Table 1) confirmed that the “cell density effect” did not compromise our perfusion mode cultivation. Clearly, the relatively high CSPR (200 pL/cell/day) and the exchange of one RV with fresh medium prior to infection were sufficient to prevent the depletion of substrates and the accumulation of ammonium and lactate as inhibitory metabolic by-products. Similar results were already reported for the production of DI244, a well-known cDIP of IAV (Dimmock et al. 2008, 2012a; Hein et al. 2021a), using the same cell line (Hein et al. 2021b).
Furthermore, for other suspension MDCK cells (ATCC) cultivated in the same medium in perfusion culture, a CSPR of only 40–60 pL/cell/day was sufficient to achieve good process performance (Wu et al. 2021). Although the high perfusion rate applied here resulted in higher medium costs, the increased STY achieved relative to the batch process should help to offset this disadvantage (Göbel et al. 2022). Nevertheless, additional studies should be performed regarding the optimal setting of the CSPR for manufacturing at final process scale.
During the cell growth phase, the perfusion rate was controlled using a capacitance probe to improve process robustness and reduce medium use as already demonstrated for other cell lines (Gränicher et al. 2021 ; Hein et al. 2021b ; Nikolay et al. 2018 ; Wu et al. 2021 ). Recent studies reported that the presence of trypsin in the virus production phase influences the permittivity signal (Petiot et al. 2017 ; Wu et al. 2021 ). To avoid an interference of trypsin on the perfusion rate control, which is based on the permittivity signal, we decided to set a constant perfusion rate after virus infection as also done by others (Hein et al. 2021b ; Vázquez-Ramírez et al. 2018 ).
Filter fouling is a typical phenomenon to be considered when using retention devices such as hollow fiber membranes (Genzel et al. 2014; Hein et al. 2021b; Nikolay et al. 2020). For virus retention, not only the nominal pore size but also the membrane material itself plays a crucial role (Nikolay et al. 2020). Furthermore, the temperature during production can affect virus retention. For IAV production at 37 °C, only a very low fraction of virus particles passed a PES hollow fiber membrane (0.2 μm pore size) (Wu et al. 2021), as expected (Genzel et al. 2014). However, reducing the temperature to 33 °C at TOI allowed harvesting of a considerable percentage of virus particles via the permeate (Wu et al. 2021), as also shown in our study (P_Perm = 26%) at a production temperature of 32 °C. In contrast, for the production of DI244, virus particles did not seem to pass the hollow fiber membrane at 32 °C. However, virus quantification in the referred study was carried out only at very late time points of production in the permeate line, so the number of virus particles passing the membrane was most likely largely underestimated (Hein et al. 2021b). Nevertheless, filter fouling could not be prevented at later time points in our study (Fig. 5d). Recently, a novel tubular membrane (about 10 μm pore size, Artemis Biosystems) with an ATF2 system was successfully tested for continuous virus harvesting of DI244 with a very high cell retention efficiency (Hein et al. 2021b). Continuous virus harvesting was also demonstrated using the Tangential Flow Depth Filtration system (TFDF, Repligen) for lentiviral vector (Tona et al. 2023; Tran and Kamen 2022) and adeno-associated virus (Mendes et al. 2022) production in perfusion mode. In general, continuous virus harvesting through a membrane, which allows for direct cooling of the produced virus material combined with a first clarification step, improves virus stability and, therefore, yields.
The use of an acoustic settler (Gränicher et al. 2020; Henry et al. 2004) or an inclined settler (Coronel et al. 2020) would be alternative options. Regarding the former, a more than 1.5-fold higher CSVY and VVP compared to an ATF system with a PES hollow fiber membrane (0.2 μm pore size) was obtained for harvesting IAV (Gränicher et al. 2020). For the production of OP7 chimera DIPs in perfusion mode, the use of such a membrane, or the implementation of another perfusion system that allows for continuous virus harvest over the complete production time, would likely be beneficial and should be envisaged in the design and optimization of a GMP-ready manufacturing process.
Overall, a scalable and high-yield cell culture-based production process in perfusion mode for OP7 chimera DIPs not contaminated with infectious STV and almost free of Seg 1 cDIPs is now available. Together with the encouraging data obtained from recent animal studies of OP7 chimera DIPs, this paves the way towards GMP process development and clinical studies.

Abstract
Defective interfering particles (DIPs) of influenza A virus (IAV) are suggested for use as broad-spectrum antivirals. We discovered a new type of IAV DIP named “OP7” that carries point mutations in its genome segment (Seg) 7 instead of a deletion as in conventional DIPs (cDIPs). Recently, using genetic engineering tools, we generated “OP7 chimera DIPs” that carry point mutations in Seg 7 plus a deletion in Seg 1. Together with cDIPs, OP7 chimera DIPs were produced in shake flasks in the absence of infectious standard virus (STV), rendering UV inactivation unnecessary. However, only part of the virions harvested were OP7 chimera DIPs (78.7%) and total virus titers were relatively low. Here, we describe the establishment of an OP7 chimera DIP production process applicable for large-scale production. To increase total virus titers, we reduced the temperature from 37 to 32 °C during virus replication. Production of almost pure OP7 chimera DIP preparations (99.7%) was achieved with a high titer of 3.24 log10(HAU/100 μL). This corresponded to an 11-fold increase relative to the initial process. Next, this process was transferred to a stirred tank bioreactor, resulting in comparable yields. Moreover, DIP harvests purified and concentrated by steric exclusion chromatography displayed an increased interfering efficacy in vitro. Finally, a perfusion process with perfusion rate control was established, resulting in a 79-fold increase in total virus yields compared to the original batch process in shake flasks. Again, a very high purity of OP7 chimera DIPs was obtained. This process could thus be an excellent starting point for good manufacturing practice production of DIPs for use as antivirals.
Key points
• Scalable cell culture-based process for highly effective antiviral OP7 chimera DIPs
• Production of almost pure OP7 chimera DIPs in the absence of infectious virus
• Perfusion mode production and purification train results in very high titers
Supplementary Information
The online version contains supplementary material available at 10.1007/s00253-023-12959-6.
Supplementary Information
Below is the link to the electronic supplementary material.

Acknowledgements
The authors thank Claudia Best and Nancy Wynserski for their excellent technical assistance. We appreciate the supply of the Xeno™ medium from Shanghai BioEngine Sci-Tech and Prof. Tan from the East China University of Science and Technology. Moreover, we thank Dunja Bruder from the Helmholtz Centre for Infection Research, Braunschweig, Germany for providing Calu-3 cells.
Author contribution
Conceptualization, L.P., T.D., M.D.H., S.Y.K., Y.G., U.R.; Formal analysis, L.P.; Funding acquisition, U.R.; Investigation, L.P., T.D., P.M., G.H.; Project administration, L.P., S.Y.K.; Supervision, S.Y.K., Y.G., U.R., Visualization, L.P.; Writing – original draft, L.P., T.D.; Writing – review & editing, L.P., T.D., P.M., M.D.H., G.H., S.Y.K., Y.G., U.R.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author upon request.
Declarations
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Conflict of interest
A patent for the use of OP7 as an antiviral agent for treatment of IAV infection is pending. Patent holders are S.Y.K. and U.R. In addition, a patent for the use of DI244 and OP7 as an antiviral agent for treatment of coronavirus infection is pending. Patent holders are S.Y.K., U.R., and M.D.H.

License: CC BY. Citation: Appl Microbiol Biotechnol. 2024 Jan 13;108(1):1–15.
PMC10787693 (PMID 38217690)

Introduction
Methicillin-resistant Staphylococcus aureus (MRSA) is a global health threat with high morbidity and mortality rates [ 1 ]. Colonization with MRSA leads to increased infection rates of up to 25% [ 2 , 3 ]. The Netherlands has one of the lowest levels of endemic MRSA in the world [ 4 ]. This low prevalence is largely attributed to a successful ‘search and destroy’ policy targeting MRSA carriage that has been executed for over three decades [ 5 ]. This policy consists of screening and pre-emptive strict isolation of hospitalized patients at increased risk of MRSA carriage, with subsequent decolonization treatment when carriage is found. Response to decolonization treatment is highly variable; in some patients, eradication treatment fails despite multiple attempts, while in others colonization is self-limiting without treatment [ 6 , 7 ]. Spontaneous clearance or persistent carriership is driven by a complex host–pathogen interaction that remains largely unresolved. Furthermore, antimicrobial treatment (i.e., eradication therapy) adds to this complex interaction and introduces pharmacodynamic and pharmacokinetic effects. In summary, patient characteristics, the antibiotic regimen, and isolate characteristics are all considered to contribute to decolonization treatment outcomes [ 7 – 9 ].
Different MRSA clones have emerged throughout the world, with high variability in virulence factors [ 10 ]. The rapid developments in the field of genetic diagnostics, especially whole-genome sequencing (WGS), have expanded the knowledge of the complexity and heterogeneity of this pathogen. MRSA strains produce a broad range of virulence factors, such as toxins, immune evasion factors, and adhesion proteins [ 11 ]. These virulence determinants are mostly carried on mobile genetic elements (MGEs), such as pathogenicity islands, plasmids, or bacteriophages [ 3 ]. Furthermore, virulence determinants can vary between hospital-associated, community-associated, and livestock-associated (LA) MRSA strains [ 12 ].
WGS of MRSA strains has been deployed extensively for infection control purposes and has proven to be of great value in the epidemiology and outbreak management of MRSA [ 13 ]. In addition, WGS allows for molecular characterization of isolates by identifying clinically relevant genetic determinants that may help to predict the response to decolonization treatment. So far, microbial genomics has not been broadly applied to identify determinants related to MRSA eradication treatment outcome [ 14 ]. For example, the presence of Panton-Valentine leucocidin (PVL) genes and genes associated with mupirocin resistance have been linked to eradication treatment outcome [ 9 , 15 ]. A recent study elaborated on genetic factors and carriage duration and showed a potential role for the bacteriophage-related chemotaxis inhibitory protein encoded by chp [ 8 ]. Insight into genetic predictors of eradication failure is potentially useful in clinical practice. Ultimately, differentiating between MRSA carriers who will benefit from eradication treatment and carriers more prone to eradication failure may enable personalized medicine.
In this explorative pilot cohort study, we evaluated genomic characteristics that are associated with MRSA decolonization failure. This was established by linking WGS data of MRSA isolates to clinical patient characteristics.

Methods
This cohort study was conducted at the University Medical Center Groningen, a tertiary hospital in the Northern part of the Netherlands, between 2017 and 2022. The prevalence of MRSA carriage in the Netherlands during this period was < 1%. During these years, genetic analyses of first MRSA isolates (from both carriage and infection) had been performed in all index patients and most of the healthcare workers for the purpose of surveillance and outbreak management. Genetic analysis was not performed in healthcare workers who were positive at their pre-employment screening, nor in positive family contacts of index patients. All patients (both adults and children) and healthcare workers for whom WGS of an MRSA isolate had been performed were retrospectively identified and screened against the selection criteria. Since healthcare workers were managed as patients in this context, they will hereafter also be referred to as ‘patients’. Inclusion criteria were ≥ 1 visit to the outpatient infectious diseases clinic because of MRSA carriage or infection, ≥ 1 positive MRSA culture from any site, and available WGS data of the MRSA isolate. The exclusion criterion was the absence of follow-up cultures. Only the first available MRSA isolate per patient was included in the analysis. The patients had been assessed by the outpatient clinicians using protocols based on the national MRSA eradication guideline [ 16 ]. In the case of an MRSA infection, this includes adequately treating the infection first and subsequently screening for persistent colonization.
Data Collection
Clinical data were extracted from the electronic patient files. This included demographics, complicated versus uncomplicated carriage, treatment regimen, duration of therapy, and follow-up cultures. MRSA culture results were extracted from the laboratory information system. This included initial and follow-up MRSA cultures, including minimal inhibitory concentrations (MICs) of antibiotics, phenotypic susceptibility results, and WGS results.
Microbiological Methods
Culturing using BHI broth with 2.5% saline and MRSAid chromagar (bioMérieux, Lyon, France), susceptibility determination by automated susceptibility testing by VITEK2 (bioMérieux, Lyon, France), and cefoxitin disk diffusion were performed according to the Dutch Society of Medical Microbiology guideline for laboratory detection of highly resistant microorganisms as part of routine diagnostic procedures [ 17 ]. MIC breakpoints and zone diameter breakpoints for resistance and intermediate sensitivity were based on EUCAST criteria [ 18 ]. The isolates were identified as S. aureus by matrix-assisted laser desorption/ionization–time of flight mass spectrometry (Bruker Daltonics, Billerica, US). First MRSA isolates per patient were genotypically confirmed by Xpert MRSA NxG based on the detection of the mecA or mecC targets (Cepheid, Sunnyvale, US).
A total DNA extraction for whole-genome sequencing was performed directly from colonies of the respective isolates using the Ultraclean Microbial DNA Isolation Kit (MO BIO Laboratories, Carlsbad, CA, US) according to the manufacturer’s protocol. DNA concentrations were determined using a Qubit® 2.0 fluorometer and the dsDNA HS and/or BR assay kit (Life Technologies, Carlsbad, CA, US). Subsequently, DNA libraries were prepared using the Nextera XT v2 kit (Illumina, San Diego, CA, US) according to the manufacturer’s instructions. Short-read sequencing was performed with an Illumina MiSeq System generating paired-end reads of 250 bp. De novo assembly of paired-end reads was performed using CLC Genomics Workbench v12.0.1-v20.0.4 (QIAGEN, Hilden, Germany) after quality trimming (Qs ≥ 20) establishing a word size of 29.
Based on next-generation sequencing data (ENA project number PRJEB59407), molecular typing was performed using Ridom SeqSphere+ v8.3.1 (Ridom, Münster, Germany). Hereby, the multilocus sequence typing (MLST) sequence type (ST) was derived, and core genome multilocus sequence typing (cgMLST) was performed using a scheme comprising 1861 alleles [ 19 ]. Isolates with a maximum of 24 allelic differences were assigned the same complex type. Antibiotic resistance genes were identified with ResFinder v4.1 (Center for Genomic Epidemiology, Lyngby, Denmark). A predefined set of virulence factors was identified using the AlereMicroarray schemes in Ridom SeqSphere+ v8.3.1 (Ridom, Münster, Germany) [ 20 ].
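The ≤ 24 allelic-difference rule for complex-type assignment can be expressed as a tiny comparison function. This is a schematic sketch with invented locus names and values, not part of the SeqSphere+ workflow itself:

```python
def allelic_distance(profile_a: dict, profile_b: dict) -> int:
    """Count differing alleles over the cgMLST loci typed in both profiles."""
    shared = profile_a.keys() & profile_b.keys()
    return sum(profile_a[locus] != profile_b[locus] for locus in shared)

def same_complex_type(profile_a: dict, profile_b: dict, threshold: int = 24) -> bool:
    # Isolates within <= 24 allelic differences are assigned the same complex type.
    return allelic_distance(profile_a, profile_b) <= threshold

# Toy profiles over three of the 1861 scheme loci (locus names are invented):
iso1 = {"locus_0001": 3, "locus_0002": 7, "locus_0003": 1}
iso2 = {"locus_0001": 3, "locus_0002": 9, "locus_0003": 1}
print(allelic_distance(iso1, iso2), same_complex_type(iso1, iso2))  # 1 True
```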
Definitions
Uncomplicated MRSA carriage was defined as having all of the following features: (i) the presence of MRSA exclusively located in the nose, (ii) no active infection with MRSA, (iii) in vitro susceptibility to mupirocin, (iv) the absence of active skin lesions, (v) the absence of foreign material connecting an internal body site with the outside (e.g., urinary catheter, external fixation material), and (vi) no previous failure of decolonization treatment. All other carriage cases were considered complicated colonization. Uncomplicated carriage is advised to be treated with topical therapy (mupirocin applied to the nares, disinfecting shampoo) and hygienic measures. In cases of complicated MRSA carriage, additional systemic antimicrobial therapy with a combination of two antibiotic agents is recommended, according to the national guideline [ 16 ]. MRSA infection was defined as a positive culture sent to the microbiology laboratory from an infected body site, as indicated by the treating physician.
Successful decolonization was defined as three consecutive negative MRSA cultures from swabs taken from nose, throat, and perineum, with the cultures obtained at 1-week intervals, without antibiotic usage [ 16 ]. For analyses, patients were divided in two groups: patients with failure of eradication treatment (failure group) and patients with successful decolonization with or without preceding treatment (successful decolonization group).
Livestock-associated MRSA was defined based on the Spa-type. The Spa-types t011, t034, t108, t567, t571, t588, t753, t779, t898, t899, t943, t1184, t1197, t1254, t1255, t1451, t1456, t1457, t2123, t2287, t2329, t2330, t2383, t2582, t2748, t2971, t2974, t3013, t3014, t3053, t3146 , and t3208 were considered to be associated with livestock [ 12 ]. All other Spa-types were considered not to be livestock-associated.
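The Spa-type rule above amounts to a set lookup; a minimal sketch (the helper name is ours, and the set is copied verbatim from the list above):

```python
# Spa-types considered livestock-associated in this study (copied from the text)
LA_SPA_TYPES = {
    "t011", "t034", "t108", "t567", "t571", "t588", "t753", "t779", "t898",
    "t899", "t943", "t1184", "t1197", "t1254", "t1255", "t1451", "t1456",
    "t1457", "t2123", "t2287", "t2329", "t2330", "t2383", "t2582", "t2748",
    "t2971", "t2974", "t3013", "t3014", "t3053", "t3146", "t3208",
}

def is_livestock_associated(spa_type: str) -> bool:
    """All Spa-types outside the set are treated as non-livestock-associated."""
    return spa_type.strip().lower() in LA_SPA_TYPES

print(is_livestock_associated("t011"), is_livestock_associated("t008"))  # True False
```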
Statistical Analysis
Data are presented as percentages or proportions for categorical variables and as medians with interquartile range (IQR) for continuous variables. Univariate analysis was performed using Fisher’s exact test. As this study has an explorative character, no adjustment for multiple testing was done.

Results
During the study period, 181 patients visited the MRSA outpatient clinic. WGS was performed in 56/181 (31%) patients, and these were included in the study (Fig. 1 ). As shown in Fig. 1 , there were 12 patients with treatment failure (one in the uncomplicated carriage group and eleven in the complicated carriage group). All other patients (n = 44) were MRSA negative at the end of follow-up and were defined as successfully decolonized (three in the uncomplicated carriage group, eight with MRSA infection without subsequent carriage, ten with spontaneous decolonization, and 23 with successful treatment of complicated carriage). Patient and treatment characteristics of these two groups are depicted in Table 1 . In the failure group, 1/12 (8%) patients had uncomplicated carriage and 11/12 (92%) had complicated carriage. The successful decolonization group consisted of 33/44 (75%) patients with complicated carriage, 3/44 (7%) patients with uncomplicated carriage, and 8/44 (18%) patients with MRSA infection without subsequent carriage. Twenty-six of 44 (59%) patients successfully underwent eradication treatment, in 10/44 (23%) patients colonization resolved spontaneously, and 8/44 (18%) were treated for an MRSA infection without subsequent eradication treatment. Of all 34 patients who underwent eradication treatment for complicated MRSA carriage, 11/34 (32%) had treatment failure. No significant differences in treatment characteristics were found between patients with treatment success and treatment failure (Table 1 ).
Lineages
Among the 56 MRSA isolates, 24 different MLST types were represented. The most predominant MLST types were ST5 (8/56) and ST22 (8/56), followed by ST8 (5/56) and ST398 (5/56) (Fig. 2 and Table S1). The complex types were mostly unique; only seven complex types were represented twice (2615, 4940, 6749, 9359, 10,282, 17,413, 24,737). All isolates ( n = 7) with livestock-associated Spa-types belonged to clonal complex 398. The non-livestock-associated MLST types ST1 (2/3), ST97-t2770 (2/2), ST6627 (1/1), and ST7119 (1/1) were more frequently or exclusively found in the failure group. In contrast, isolates of patients with successful decolonization predominantly belonged to the community-associated lineages ST6-t304 (4/4) and ST8-t008 (5/5), and the livestock-associated clonal complex 398 (7/7) (Fig. 2 ).
Susceptibility and Resistance Genes
All MRSA isolates tested susceptible to the antibiotics used in the eradication treatments, in line with the sequencing data, which showed the absence of acquired resistance genes against these drugs (Table S2). Treatment failure was therefore not the result of resistance against the antibiotics used for treatment. A significant association was found between ciprofloxacin resistance and failure of eradication (OR 4.20, 95%CI 1.11–15.96, P = 0.04) (Table 2 ). None of the patients had been treated with ciprofloxacin. The ciprofloxacin-resistant isolates belonged to ST5 (5), ST8 (2), ST22 (3), ST30 (1), ST97 (2), ST105 (1), ST398 (1), ST5544 (1), ST7119 (1), and ST8018 (1). In the ciprofloxacin-resistant isolates ( n = 18), we detected one or more of the associated point mutations S84L (10/18) in the gyrase GyrA; S80F (14/18), S80Y (3/18), E84G (2/18), or I45M (1/18) in the DNA topoisomerase IV subunit GrlA; and P585S (1/18) in GrlB (Table S3). Among the isolates of patients with treatment failure, mutations associated with ciprofloxacin resistance were identified in 7/12 (58%), whereas among the isolates of patients with successful decolonization, these mutations were identified in 13/44 (30%) (Fig. 3 ). Two isolates with the unique point mutation I45M in GrlA did not show increased MICs for ciprofloxacin. All seven persons with ciprofloxacin-resistant MRSA and failure of eradication treatment were either healthcare workers or had most likely acquired the MRSA during hospitalization or after medical interventions. Rifampicin resistance-associated point mutations were found in four isolates (I527L [3/4] and D471Y [1/4] in RpoB). Notably, although all four of these isolates had a rifampicin MIC ≤ 0.03 mg/L, they belonged to four patients with treatment failure (Table S3). No other associations were found between phenotypic antibiotic resistance or resistance genes and failure of eradication treatment (Table 3 ).
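The reported OR of 4.20 and P = 0.04 can be reproduced with a self-contained two-sided Fisher's exact test (pure Python, no SciPy needed). The 2×2 table is our reconstruction from the counts above (7 of 12 failure-group vs 11 of 44 success-group patients with phenotypically ciprofloxacin-resistant isolates, assuming the two I45M isolates without increased MICs belonged to the successful group):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the table [[a, b], [c, d]].

    Returns (sample odds ratio, two-sided p), summing all tables with the
    same margins whose probability does not exceed the observed one.
    """
    n, row1, col1 = a + b + c + d, a + b, a + c
    def p_table(x):  # hypergeometric probability of cell (1,1) == x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    p_two = sum(p_table(x) for x in range(lo, hi + 1)
                if p_table(x) <= p_obs * (1 + 1e-9))  # float-safety margin
    odds = (a * d) / (b * c) if b * c else float("inf")
    return odds, p_two

# Reconstructed table: failure group 7 resistant / 5 susceptible,
# success group 11 resistant / 33 susceptible.
odds, p = fisher_exact_2x2(7, 5, 11, 33)
print(f"OR = {odds:.2f}, P = {p:.3f}")  # OR = 4.20, P = 0.040
```

Note that this gives the unconditional (sample) odds ratio a·d / b·c; the reported 95% CI would additionally require a log-odds standard-error or exact-conditional calculation.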
Virulence Factors
An overview of the distribution of virulence genes among the patients with eradication failure and the patients with successful decolonization is presented in Table 4 . No associations were found between virulence genes and failure of eradication. Remarkably, PVL ( lukF_PV and lukS_PV ) was found more often in patients with successful decolonization than in patients with eradication failure, although this difference was not significant (30% vs 17%, P = 0.48). The genes lukF_PV , lukS_PV , and splE were significantly associated with an MRSA infection ( P < 0.05). The genes aur, hlgABC, icaACD, setB, setC, hlI, hlII, arcc, aroe, glpf, gmk, pta, tpi, yqil, isaB, lukX, lukY , and ebpS were present in all isolates and were therefore excluded from the analysis. The genes arc, edinABC, etABD, seb, sec , and sed were only sporadically present and were therefore also excluded from the analysis.

Discussion
In this study, we explored associations between MRSA isolate characteristics, genetic determinants, and decolonization outcomes in a Dutch population of MRSA carriers in a tertiary hospital. We found an association of eradication failure with carriage of ciprofloxacin-resistant healthcare-associated lineages, whereas livestock-associated MRSA lineage ST398 and the majority of community-associated MRSA lineages ST6-t304 and ST8-t008 were associated with successful eradication treatment or spontaneous clearance.
The failure rate of eradication treatment in complicated MRSA carriers was higher than in previous Dutch reports [ 5 , 7 ]. Our study was conducted in the outpatient clinic of a tertiary hospital, with, consequently, an above-average representation of healthcare workers and patients with an extensive history of hospitalizations. Such patients mainly carry healthcare-associated MRSA strains, which are adapted to survive under harsh nosocomial conditions and antibiotic exposure.
In our study, we found an association between ciprofloxacin resistance and failure of eradication treatment. Remarkably, none of the patients had been treated with ciprofloxacin. The ciprofloxacin-resistant MRSA isolates in our study belonged to various lineages, including five isolates of the healthcare-associated ST5 lineage with the single amino acid substitution S80F in GrlA. This mutation in the healthcare-associated lineage, and its association with fluoroquinolone resistance and the presence of virulence genes such as enterotoxins, the β-hemolysin-converting phage, and leucocidins, has been described previously [ 21 ]. Resistance to fluoroquinolones is generally high in healthcare-associated MRSA [ 22 ]. Successful hospital-adapted ciprofloxacin-resistant lineages have emerged among several nosocomial species, such as E. coli , K. pneumoniae , vancomycin-resistant E. faecium , and MRSA. These lineages have acquired stable point mutations in the gyrase and/or topoisomerase IV enzymes [ 23 ]. It is unclear what drives this evolution, besides exposure to fluoroquinolones.
Both tolerance and persistence have been reported in low-level ciprofloxacin-resistant E. coli , allowing them to survive exposure to therapeutic concentrations of ciprofloxacin [ 24 ]. In tolerance, bacterial cells survive using a “hibernation mode,” in which the cell cycle and metabolism are temporarily halted, preventing killing by antibiotics. In persistence, a bacterial subpopulation is able to survive antibiotic exposure [ 25 ]. Cross-tolerance to multiple drugs has been reported, but it does not necessarily occur in all tolerant isolates and depends on the antibiotic regimen and duration of exposure [ 26 ]. To the best of our knowledge, no studies have reported cross-tolerance of low-level ciprofloxacin-resistant S. aureus isolates to the antibiotic regimens used for MRSA eradication in this study. Therefore, the explanation for the association found in our study remains uncertain. Potentially, healthcare-associated MRSAs are more prone to failure of eradication treatment, and ciprofloxacin resistance may be a biomarker for these difficult-to-treat lineages.
The recently reported association between chp and carriage duration [ 8 ] was not found in our study. Compared to the Danish study, our patient population carried more healthcare-associated MRSA. Also, there is substantial heterogeneity between the Danish and Dutch MRSA treatment guidelines. The main difference is the more general use of two systemic antibiotics in the Netherlands, compared to only sporadic systemic treatment in Denmark.
Two studies, in Denmark and Sweden, reported that PVL-positive isolates had a higher eradication success rate [ 15 , 27 ]. We also found a higher (non-significant) rate of PVL-positive isolates in the successful eradication group, mainly belonging to the CA-MRSA lineages ST30 and ST8-t008. However, associations do not necessarily reflect an etiologic cause, but can also reflect markers or confounders. We postulate that PVL is a marker of certain non-healthcare-associated MRSA lineages that are easier to eradicate, rather than the PVL toxin having a direct positive effect on eradication outcomes.
There are multiple factors of potential influence on MRSA eradication outcome. Carriers can reacquire MRSA isolates from contamination in their environment, or from positive household members. The eradication treatment of patients in this study was performed in a specialized outpatient clinic setting, following the Dutch eradication protocol [ 16 ]. Several measures are taken to prevent reacquisition, such as simultaneous treatment of positive household members and hygienic instructions. Isolate characteristics may also play a role in the risk of spread and reacquisition of MRSA. Hetem et al. showed that in a hospital setting, the transmission of livestock-associated MRSA was 4.4 times lower compared to non-livestock-associated MRSA isolates [ 12 ]. In general, MRSA isolates may survive antibiotic exposure despite having a MIC indicating susceptibility to the antibiotic agent. Our study showed that antibiotic treatment failure is not explained by the common acquired resistance genes, whose presence or absence corresponded to the phenotypic susceptibility in all isolates. However, alternative survival mechanisms under antibiotic exposure, such as tolerance and persistence, are not detectable by measuring MICs. Other potential factors influencing MRSA eradication outcome, e.g., therapy non-compliance and host genetics [ 28 ], were not assessed in our study.
There are some limitations to this study. It is a single-center study with a small sample size, a heterogeneous population, and a limited number of failed treatments. In addition, we did not always confirm whether treatment failure was caused by the same clone or by acquisition of a different MRSA. However, given the very low prevalence of MRSA in the Netherlands, the latter would be highly unlikely. Furthermore, we did not correct for multiple testing. However, since this is an explorative study on a relatively unexplored subject, we believe the results are still valid and useful in targeting future research. For this explorative purpose, we focused on pathogen factors and only added a limited number of host characteristics (i.e., sex, age, and complicated versus uncomplicated carriership). Other host factors—including host genetics—may influence the risk of treatment failure as well. Lastly, we investigated genes with a previously reported role in virulence. Future genome-wide association studies could perhaps identify novel genetic factors implicated in intracellular survival and biofilm formation that predict eradication failure. However, this requires a larger and preferably prospective data set.
In conclusion, this explorative study showed a higher eradication failure rate in complicated MRSA carriers with ciprofloxacin-resistant MRSA lineages, which are predominantly healthcare-associated. In contrast, carriers of livestock-associated MRSA and the major community-associated ST8 and ST6 lineages were generally successfully decolonized. Further studies are warranted to confirm the higher eradication failure risk of ciprofloxacin-resistant lineages, and identify the underlying mechanisms. The identification of lineages that are prone to eradication failure is of clinical relevance, since it could influence the initiation and monitoring of MRSA eradication therapy. | Methicillin-resistant Staphylococcus aureus (MRSA) colonization increases the risk of infection. Response to decolonization treatment is highly variable, and determinants of successful decolonization or failure of eradication treatment are largely unknown. Insight into genetic predictors of eradication failure is potentially useful in clinical practice. The aim of this study was to explore genetic characteristics that are associated with MRSA decolonization failure. This cohort study was performed in a tertiary care hospital in the Netherlands. Patients with ≥ 1 positive MRSA culture from any site and with available whole-genome sequencing data of the MRSA isolate between 2017 and 2022 were included. Lineages, resistance, and virulence factors were stratified by MRSA decolonization outcome. In total, 56 patients were included: 12/56 (21%) with treatment failure and 44/56 (79%) with successful decolonization (with or without preceding treatment). A significant association was found between ciprofloxacin-resistant lineages and failure of eradication (OR 4.20, 95%CI 1.11–15.96, P = 0.04). Furthermore, livestock-associated MRSA and the major community-associated MRSA lineages ST6-t304 and ST8-t008 were associated with successful eradication treatment or spontaneous clearance. 
In conclusion, this explorative study showed a higher eradication failure rate in complicated MRSA carriers with ciprofloxacin-resistant MRSA lineages, which are predominantly healthcare-associated. Further studies are warranted to confirm the higher eradication failure risk of ciprofloxacin-resistant lineages, and identify the underlying mechanisms.
Supplementary Information
The online version contains supplementary material available at 10.1007/s00284-023-03581-w. | Supplementary Information
Below is the link to the electronic supplementary material. | Acknowledgements
The authors would like to thank Kees Swenne and Karin Wold for their invaluable help with data collection.
Funding
This project was partially funded by the Antibiotic Resistance Network Holland West.
Data Availability
The MRSA sequence data have been deposited in ENA under project number PRJEB59407.
Declarations
Conflict of interest
The authors have no conflict of interests to declare.
Ethical Approval
Approval and patient consent were waived by the medical ethical committee of the University Medical Center Groningen. | CC BY | no | 2024-01-15 23:41:53 | Curr Microbiol. 2024 Jan 13; 81(2):63 | oa_package/29/e8/PMC10787693.tar.gz |
|
PMC10787694 | 0 | Introduction
Goal Recognition is the task of recognizing the intentions of autonomous agents or humans by observing their interactions in an environment. Existing work on goal and plan recognition addresses this task over several different types of domain settings, such as plan-libraries [ 4 ], plan tree grammars [ 19 ], classical planning domain models [ 31 , 34 , 35 , 37 ], stochastic environments [ 36 ], continuous domain models [ 22 ], incomplete discrete domain models [ 29 ], and approximate control models [ 30 ]. Despite the ample literature and recent advances, most existing approaches to Goal Recognition as Planning cannot recognize temporally extended goals , i.e., goals formalized in terms of time, e.g., the exact order that a set of facts of a goal must be achieved in a plan. Recently, [ 1 ] propose a general formulation of a temporal inference problem in deterministic planning settings. However, most of these approaches also assume that the observed actions’ outcomes are deterministic and do not deal with unpredictable, possibly adversarial, environmental conditions.
Research on planning for temporally extended goals in deterministic and non-deterministic domains has increased over the years, starting with the pioneering work on planning for temporally extended goals [ 5 ] and on planning via model checking [ 12 ]. This continued with the work on integrating ltl goals into planning tools [ 27 , 28 ] and, most recently, with [ 7 ], which introduces a novel optimal encoding of Pure-Past Linear Temporal Logic goals into PDDL for Classical Planning . Other existing works relate program synthesis [ 33 ] with planning in non-deterministic domains for temporal specifications, recently focusing on the finite-trace variants of ltl [ 2 , 9 , 10 , 14 – 16 ].
In this paper, we introduce the task of goal recognition in discrete domains that are fully observable and in which the outcomes of actions are non-deterministic , possibly adversarial, i.e., Fully Observable Non-Deterministic ( fond ) domains, allowing the formalization of temporally extended goals using two types of temporal logic on finite traces: Linear-time Temporal Logic ( ltl ) and Pure-Past Linear-time Temporal Logic ( ppltl ) [ 17 ].
The main contribution of this paper is three-fold. First, based on the definition of Plan Recognition as Planning introduced in [ 34 ], we formalize the problem of recognizing temporally extended goals (expressed in ltl or ppltl ) in fond planning domains, handling both stochastic (i.e., strong-cyclic plans) and adversarial (i.e., strong plans) environments [ 2 ]. Second, we extend the probabilistic framework for goal recognition proposed in [ 35 ], and develop a novel probabilistic approach that reasons over executions of policies and returns a posterior probability distribution for the goal hypotheses. Third, we develop a compilation approach that generates an augmented fond planning problem by compiling temporally extended goals together with the original planning problem. This compilation allows us to use any off-the-shelf fond planner to perform the recognition task in fond planning models with temporally extended goals.
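To illustrate the probabilistic side, the sketch below computes a posterior distribution over goal hypotheses from per-goal scores using a Boltzmann weighting and a uniform prior; the scoring function, the β parameter, and the goal names are illustrative assumptions, not the exact terms of our framework.

```python
import math

def goal_posterior(scores, priors=None, beta=1.0):
    """Posterior P(G | Obs) over goal hypotheses from per-goal scores.

    `scores` maps each goal hypothesis to a non-negative value measuring
    how badly the observations match policies for that goal (higher =
    worse match). The Boltzmann weighting and uniform prior below are
    illustrative assumptions, not the paper's exact terms.
    """
    goals = list(scores)
    if priors is None:
        priors = {g: 1.0 / len(goals) for g in goals}
    weights = {g: math.exp(-beta * scores[g]) * priors[g] for g in goals}
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}

posterior = goal_posterior({"g1": 0.0, "g2": 2.0, "g3": 4.0})
best = max(posterior, key=posterior.get)   # goal best matching the observations
```

A lower score (better match between observations and the goal's policies) yields higher posterior mass, so the recognizer returns the goal hypotheses ranked by probability.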
This work focuses on fond domains with stochastic non-determinism, and conducts an extensive set of experiments with different complex problems. We empirically evaluate our approach using different ltl and ppltl goals over six fond planning domain models, including a real-world non-deterministic domain model [ 26 ]. Our experiments show that our approach accurately recognizes temporally extended goals in two different recognition settings: offline recognition , in which the recognition task is performed in “one-shot” and the observations are given at once and may contain missing information; and online recognition , in which the observations are received incrementally and the recognition task is performed gradually.
We now assess how accurate our recognition approach is in the Keyhole Recognition setting. Table 1 shows three inner tables that summarize and aggregate the average results of all six datasets for four different metrics, namely Time , TPR , FPR , and FNR . The tables also report the average number of goals in the datasets and | Obs |, the average number of observations. Each row in these inner tables represents the observation level, varying from 10% to 100%. Figure 5 shows the performance of our approach by comparing the results using F1-Score for the six types of temporal formulas we used for evaluation. Table 2 shows in much more detail the results for each of the six datasets used to evaluate our recognition approach.
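For reference, the metrics aggregated in Table 1 can be computed from per-problem recognition counts as follows; the counts in the example call are made-up numbers (chosen so that TPR matches the 0.74 discussed below), not results from the datasets.

```python
def recognition_metrics(tp, fp, fn, tn):
    """TPR, FPR, FNR, and F1-Score from recognition counts."""
    tpr = tp / (tp + fn)          # true positive rate (recall)
    fpr = fp / (fp + tn)          # false positive rate
    fnr = fn / (tp + fn)          # false negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    return tpr, fpr, fnr, f1

# made-up counts for illustration only
tpr, fpr, fnr, f1 = recognition_metrics(tp=74, fp=10, fn=26, tn=90)
```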
Offline results for conjunctive and eventuality goals
The first inner table shows the average results comparing the performance of our approach between conjunctive goals and temporally extended goals using the eventually temporal operator. We refer to this comparison as the baseline since these two types of goals have the same semantics. We can see that the results for these two types of goals are very similar for all metrics. Moreover, our recognition approach is very accurate and performs well at all levels of observability, yielding high TPR values and low FPR and FNR values for more than 10% of observability. Note that for 10% of observability and ltl eventuality goals, the average TPR value is 0.74, meaning that for 74% of the recognition problems our approach correctly recognized the intended temporally extended goal when observing, on average, only 3.85 actions. Figure 5 a shows that our approach yields higher F1-Score values (i.e., greater than 0.79) for these types of formulas when dealing with more than 50% of observability.
Offline results for ltl goals
Regarding the results for the two types of ltl goals (second inner table), our approach is accurate for all metrics at all levels of observability, apart from the results at 10% of observability for ltl goals in which the sub-goals must be achieved in a certain order. Note that our approach is accurate even when observing just a few actions (2.1 for 10% and 5.4 for 30%), although not as accurate as for more than 30% of observability. Figure 5 b shows that our approach yields higher F1-Score values (i.e., greater than 0.75) when dealing with more than 30% of observability.
Offline results for ppltl goals
Finally, as for the results for the two types of ppltl goals, the last inner table shows that the overall average number of observations | Obs | is smaller than for the other datasets, making the goal recognition task harder for the ppltl datasets. Yet, our recognition approach remains accurate when dealing with fewer observations. Moreover, the FNR values increase at low observability, but the FPR values are, on average, below 0.15. Figure 5 c shows that the F1-Score values of our approach gradually increase as the percentage of observability increases.
To the best of our knowledge, existing approaches to Goal and Plan Recognition as Planning cannot explicitly recognize temporally extended goals in non-deterministic environments. Seminal and recent work on Goal Recognition as Planning relies on deterministic planning techniques [ 31 , 34 , 37 ] for recognizing conjunctive goals. By contrast, we propose a novel problem formalization for goal recognition, addressing temporally extended goals ( ltl or ppltl goals) in fond planning domain models. While our probabilistic approach relies on the probabilistic framework of [ 35 ], we address the challenge of computing the required probabilities in a completely different way.
There exist different techniques for Goal and Plan Recognition in the literature, including approaches that rely on plan libraries [ 4 ], context-free grammars [ 19 ], and Hierarchical Task Networks (HTN) [ 21 ]. Such approaches rely on hierarchical structures that represent the knowledge of how to achieve the possible goals, and this knowledge can be seen as potential strategies for achieving the set of possible goals. Note that the temporal constraints of temporally extended goals can be adapted and translated to such hierarchical knowledge. For instance, context-free grammars are expressive enough to encode temporally extended goals [ 11 ]. ltl has the expressive power of the star-free fragment of regular expressions and is hence captured by context-free grammars. However, unlike regular expressions, ltl uses negation and conjunction liberally, and the translation to regular expressions is computationally costly. Note that being equally expressive is not a meaningful indication of the complexity of transforming one formalism into another: [ 17 ] show that, while ltl and ppltl have the same expressive power, the best known translation techniques are worst-case 3EXPTIME.
As far as we know, there are no encodings of ltl -like specification languages into HTN, and its difficulty is unclear. Nevertheless, combining HTN and ltl could be interesting for further study. HTN techniques focus on the knowledge about the decomposition property of traces, whereas ltl -like solutions focus on the knowledge about dynamic properties of traces, similar to what is done in verification settings. Most recently, [ 7 ] develop a novel Pure-Past Linear Temporal Logic PDDL encoding for planning in the Classical Planning setting. | Conclusions
This article introduced a novel problem formalization for recognizing temporally extended goals , specified in either ltl or ppltl , in fond planning domain models. It also developed a novel probabilistic framework for goal recognition in such settings, and implemented a compilation of temporally extended goals that allows us to reduce the problem of fond planning for ltl / ppltl goals to standard fond planning. Our experiments have shown that our recognition approach yields high accuracy for recognizing temporally extended goals ( ltl / ppltl ) in different settings ( Keyhole Offline and Online recognition) at several levels of observability.
As future work, we intend to extend and adapt our recognition approach to deal with spurious (noisy) observations, and to recognize not only the temporally extended goal but also anticipate the policy that the agent is executing to achieve it.
is a researcher in the field of Artificial Intelligence, particularly in the area of Automated Planning. His research interests include Goal and Plan Recognition and Heuristic Search for Sequential Decision-Making. He is currently a Lecturer in Symbolic AI and Autonomous Systems in the Department of Computer Science at the University of Manchester, England, UK. From April 2020 to December 2022, he was a Research Associate at the Dipartimento di Ingegneria Informatica, Automatica e Gestionale (DIAG) at Sapienza Università di Roma, working with Prof. Dr. Giuseppe De Giacomo on his Advanced ERC project WhiteMech. He obtained his Ph.D. degree in 2020 at the Pontifical Catholic University of Rio Grande do Sul (PUCRS), Porto Alegre, Brazil, under the supervision of Prof. Dr. Felipe Meneguzzi (PUCRS) and Dr. Miquel Ramírez (University of Melbourne). During his Ph.D. (from June 2018 to April 2019), he was a Ph.D. Intern in the School of Computing and Information at the University of Melbourne, under the supervision of Dr. Miquel Ramírez. In 2020, he was recognised as having the best Ph.D. thesis in Artificial Intelligence in Brazil (CTDIAC at the 9th Brazilian Conference on Intelligent Systems, BRACIS).
Francesco Fuggitti
is a Research Scientist at IBM Research AI in the AI Composition Lab, Cambridge (MA). His research interests include sequential decision-making, formal methods for Artificial Intelligence, conversational intelligence, natural language understanding, process management and automation, and large language models. Francesco received a double Ph.D. from Sapienza University in Rome and York University in Toronto, under the supervision of Professors Giuseppe De Giacomo and Yves Lespérance. Francesco's Ph.D. work received international recognition, including the Best Student Paper Award and Best System Demonstration Award Runner-Up at ICAPS 2023.
Felipe Meneguzzi
is a researcher on Automated Planning, Goal and Plan Recognition, Multiagent Systems, BDI Agents, and Machine Learning. He currently holds a Chair of Computing Science at the University of Aberdeen and is part of the Agents at Aberdeen research group. He holds a Bridges Professorship at the Pontifícia Universidade Católica do Rio Grande do Sul. He is a Senior Member of the ACM and of AAAI. He is currently a councillor in the Executive Council of the AAAI and a board member of the Special Committee on Artificial Intelligence for the Brazilian Computer Society. As a UK-based researcher, he is a member of the UKRI Talent Panel College and of the EPSRC Peer Review College. In Brazil, he held CNPq's highly productive researcher fellowship.
Goal Recognition is the task of discerning the intended goal that an agent aims to achieve, given a set of goal hypotheses, a domain model, and a sequence of observations (i.e., a sample of the plan executed in the environment). Existing approaches assume that goal hypotheses comprise a single conjunctive formula over a single final state and that the environment dynamics are deterministic, preventing the recognition of temporally extended goals in more complex settings. In this paper, we expand goal recognition to temporally extended goals in Fully Observable Non-Deterministic ( fond ) planning domain models, focusing on goals on finite traces expressed in Linear Temporal Logic ( ltl ) and Pure-Past Linear Temporal Logic ( ppltl ). We develop the first approach capable of recognizing goals in such settings and evaluate it using different ltl and ppltl goals over six fond planning domain models. Empirical results show that our approach is accurate in recognizing temporally extended goals in different recognition settings.
Keywords | Preliminaries
This section briefly recalls the syntax and semantics of Linear-time Temporal Logics on finite traces ( ltl and ppltl ) and reviews the concepts and terminology of fond planning.
ltl and PPLTL
Linear Temporal Logic on finite traces ( ltl ) is a variant of ltl introduced in [ 32 ] and interpreted over finite traces . Given a set of atomic propositions AP , the syntax of ltl formulas is defined as follows: φ ::= a | ¬φ | φ₁ ∧ φ₂ | ◯φ | φ₁ U φ₂, where a denotes an atomic proposition in AP , ◯ is the next operator, and U is the until operator. Apart from the Boolean connectives, we use the following abbreviations: eventually as ◇φ ≐ true U φ; always as □φ ≐ ¬◇¬φ; weak next as •φ ≐ ¬◯¬φ. A trace τ = τ₀τ₁⋯ is a sequence of propositional interpretations, where τₘ ∈ 2^AP is the m -th interpretation of τ, and |τ| is the length of τ. We denote a finite trace formally as τ ∈ (2^AP)⁺. Given a finite trace τ and an ltl formula φ, we inductively define when φ holds in τ at position i , written τ, i ⊨ φ, as follows: τ, i ⊨ a iff a ∈ τᵢ; τ, i ⊨ ¬φ iff τ, i ⊭ φ; τ, i ⊨ φ₁ ∧ φ₂ iff τ, i ⊨ φ₁ and τ, i ⊨ φ₂; τ, i ⊨ ◯φ iff i + 1 < |τ| and τ, i + 1 ⊨ φ; τ, i ⊨ φ₁ U φ₂ iff there exists j such that i ≤ j < |τ| and τ, j ⊨ φ₂, and for all k , i ≤ k < j, we have τ, k ⊨ φ₁. An ltl formula φ is true in τ, denoted by τ ⊨ φ, iff τ, 0 ⊨ φ. As advocated by [ 17 ], this paper also uses the pure-past version of ltl , here denoted as ppltl , due to its compelling computational advantage compared to ltl when goal specifications are naturally expressed in a past fashion. ppltl refers only to the past and has a natural interpretation on finite traces: formulas are satisfied if they hold in the current (i.e., last) position of the trace.
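The finite-trace semantics above translates directly into a recursive evaluator. The sketch below is a straightforward, non-optimized implementation; the tuple encoding of formulas is our own choice for illustration.

```python
def holds(trace, i, phi):
    """Evaluate a formula phi on `trace` (a list of sets of atoms) at
    position i. Formulas are nested tuples: ("atom", a), ("not", p),
    ("and", p, q), ("next", p), ("until", p, q), ("ev", p)."""
    op = phi[0]
    if op == "atom":
        return phi[1] in trace[i]
    if op == "not":
        return not holds(trace, i, phi[1])
    if op == "and":
        return holds(trace, i, phi[1]) and holds(trace, i, phi[2])
    if op == "next":                      # strong next: needs a successor
        return i + 1 < len(trace) and holds(trace, i + 1, phi[1])
    if op == "until":                     # p until q; q must occur
        return any(holds(trace, j, phi[2]) and
                   all(holds(trace, k, phi[1]) for k in range(i, j))
                   for j in range(i, len(trace)))
    if op == "ev":                        # eventually, sugar for true U p
        return any(holds(trace, j, phi[1]) for j in range(i, len(trace)))
    raise ValueError(op)

trace = [{"a"}, set(), {"b"}]             # a finite trace of three steps
```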
Given a set AP of propositional symbols, ppltl formulas are defined by: φ ::= a | ¬φ | φ₁ ∧ φ₂ | Yφ | φ₁ S φ₂, where a ∈ AP , Y is the before operator, and S is the since operator. Similarly to ltl , common abbreviations are the once operator Oφ ≐ true S φ and the historically operator Hφ ≐ ¬O¬φ. Given a finite trace τ and a ppltl formula φ, we inductively define when φ holds in τ at position i , written τ, i ⊨ φ, as follows. For atomic propositions and Boolean operators it is as for ltl . For past operators: τ, i ⊨ Yφ iff i ≥ 1 and τ, i − 1 ⊨ φ; τ, i ⊨ φ₁ S φ₂ iff there exists k such that 0 ≤ k ≤ i and τ, k ⊨ φ₂, and for all j , k < j ≤ i, we have τ, j ⊨ φ₁. A ppltl formula φ is true in τ, denoted by τ ⊨ φ, if and only if τ, |τ| − 1 ⊨ φ. A key property of temporal logics exploited in this work is that, for every ltl / ppltl formula φ, there exists a Deterministic Finite-state Automaton (DFA) accepting exactly the traces satisfying φ [ 15 , 17 ].
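A matching evaluator for the past operators, again with an ad-hoc tuple encoding of our own; per the semantics above, a formula holds on a trace iff it holds at the last position.

```python
def holds_past(trace, i, phi):
    """Evaluate a pure-past formula at position i of `trace` (a list of
    sets of atoms); a formula is true on the trace iff it holds at the
    last position, i = len(trace) - 1."""
    op = phi[0]
    if op == "atom":
        return phi[1] in trace[i]
    if op == "not":
        return not holds_past(trace, i, phi[1])
    if op == "and":
        return holds_past(trace, i, phi[1]) and holds_past(trace, i, phi[2])
    if op == "yesterday":                  # the "before" operator Y
        return i >= 1 and holds_past(trace, i - 1, phi[1])
    if op == "since":                      # phi1 S phi2
        return any(holds_past(trace, k, phi[2]) and
                   all(holds_past(trace, j, phi[1])
                       for j in range(k + 1, i + 1))
                   for k in range(i + 1))
    if op == "once":                       # O phi, sugar for true S phi
        return any(holds_past(trace, k, phi[1]) for k in range(i + 1))
    raise ValueError(op)

trace = [{"a"}, {"b"}, set()]              # evaluate at the last position, i = 2
```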
fond planning
A Fully Observable Non-deterministic Domain planning model ( fond ) is a tuple D = ⟨2^F, A, α, tr⟩ [ 18 ], where 2^F is the set of possible states and F is a set of fluents (atomic propositions); A is the set of actions; α(s) ⊆ A is the set of applicable actions in a state s ; and tr ( s , a ) is the non-empty set of successor states that follow action a in state s . A domain D is assumed to be compactly represented (e.g., in PDDL [ 24 ]), hence its size is |F| + |A|. Given the set of literals of F as Lits(F) = F ∪ {¬f | f ∈ F}, every action a ∈ A is usually characterized by ⟨Pre_a, Eff_a⟩, where Pre_a ⊆ Lits(F) is the action preconditions, and Eff_a is the action effects. An action a can be applied in a state s if the set of fluents in Pre_a holds true in s . The result of applying a in s is a successor state non-deterministically drawn from one of the effects in Eff_a. In fond planning, some actions have uncertain outcomes , such that they have non-deterministic effects (i.e., |Eff_a| > 1 in all states s in which a is applicable), and effects cannot be predicted in advance. PDDL expresses uncertain outcomes using the oneof [ 8 ] keyword, as widely used by several fond planners [ 23 , 25 ]. A fond planning problem is formally defined as follows.
Definition 1
A fond planning problem is a tuple P = ⟨D, s₀, G⟩, where D is a fond domain model, s₀ is an initial assignment to fluents in F (i.e., initial state), and G ⊆ F is the goal.
Solutions to a fond planning problem are policies . A policy is usually denoted as π, and formally defined as a partial function π : 2^F → A mapping non-goal states into applicable actions that eventually reach a goal state complying with G from the initial state s₀. We say a state s complies with G if G ⊆ s. A policy π for P induces a set of possible executions E = {e₁, e₂, …}, that are state trajectories, possibly finite (i.e., histories) e = [s₀, s₁, …, sₙ], where sᵢ₊₁ ∈ tr(sᵢ, π(sᵢ)) for i = 0, …, n − 1, or possibly infinite e = [s₀, s₁, …], obtained by choosing some possible outcome of actions instructed by the policy. A policy π is a solution to P if every execution e ∈ E is finite and satisfies the goal G in its last state, i.e., sₙ ⊨ G. In this case, π is winning . [ 13 ] define three solutions to fond planning problems: weak, strong and strong-cyclic solutions, formally defined in Definitions 2 , 4 , and 3 .
Definition 2
A weak solution is a policy that achieves a goal state complying with G from the initial state under at least one selection of action outcomes; namely, such solution will have some chance of achieving a goal state complying with G .
Definition 3
A strong-cyclic solution is a policy that guarantees to achieve a goal state complying with G from the initial state only under the assumption of fairness 1 . However, this type of solution may revisit states, so the solution cannot guarantee to achieve G in a fixed number of steps.
Definition 4
A strong solution is a policy that is guaranteed to achieve a goal state complying with G from the initial state regardless of the environment’s non-determinism. This type of solution guarantees the achievement of G in a finite number of steps while never visiting the same state twice.
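The strong-solution condition can be sketched as a recursive check over the executions a policy induces: every branch must reach the goal without revisiting a state. This toy version works on explicitly enumerated states and ignores the fairness assumption behind strong-cyclic solutions; all state and action names are illustrative.

```python
def winning(policy, tr, goal, state, on_path=frozenset()):
    """Check that every execution induced by `policy` from `state` ends in
    a goal state without revisiting a state (strong-solution style).

    `tr(state, action)` returns the possible successor states and
    `goal(state)` tests goal satisfaction."""
    if goal(state):
        return True
    if state in on_path or state not in policy:
        return False                       # loop, or no action instructed
    return all(winning(policy, tr, goal, nxt, on_path | {state})
               for nxt in tr(state, policy[state]))

# illustrative fragment inspired by the Triangle-Tireworld example
transitions = {("11", "move-11-21"): ["21", "21-flat"],
               ("21-flat", "changetire-21"): ["21"],
               ("21", "move-21-22"): ["22"]}
policy = {"11": "move-11-21", "21-flat": "changetire-21", "21": "move-21-22"}
ok = winning(policy, lambda s, a: transitions[(s, a)],
             lambda s: s == "22", "11")
```

A policy that omits the flat-tire state fails the check, since the execution branch where the tire goes flat can no longer reach the goal.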
This work focuses on strong-cyclic solutions , where the environment acts in an unknown but stochastic way. Nevertheless, our recognition approach applies to strong solutions as well, where the environment is purely adversarial (i.e., the environment may always choose effects against the agent).
Our running example comes from the well-known Triangle-Tireworld fond domain, where roads connect locations, and the agent can drive through them. The objective is to drive from one location to another. However, while driving between locations, a tire may go flat, and if there is a spare tire in the car’s location, then the car can use it to fix the flat tire. Figure 1 a illustrates a fond planning problem for the Triangle-Tireworld domain, where circles are locations, arrows represent roads, spare tires are depicted as tires, and the agent is depicted as a car. Figure 1 b shows a policy to achieve location 22. Note that, to move from location 11 to location 21, there are two arrows labeled with the action (move 11 21): (1) when moving does not cause the tire to go flat; (2) when moving causes the tire to go flat. The policy depicted in Fig. 1 b guarantees the success of achieving location 22 despite the environment’s non-determinism.
As in Classical Planning , the cost of all non-deterministic instantiated actions is 1. In this example, the policy π depicted in Fig. 1 b has two possible finite executions in the set of executions E = {e₁, e₂}, namely: e₁: [(move 11 21), (move 21 22)]; and e₂: [(move 11 21), (changetire 21), (move 21 22)].
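The two executions above can be reproduced by enumerating the trajectories a policy induces. The transition fragment below simplifies the running example (in particular, the final move is made deterministic), so the state and action names are illustrative.

```python
def executions(policy, tr, goal, state, prefix=()):
    """Enumerate the finite executions (state trajectories) induced by a
    policy from `state`, branching on each non-deterministic outcome."""
    prefix = prefix + (state,)
    if goal(state):
        yield prefix
        return
    for nxt in tr(state, policy[state]):
        yield from executions(policy, tr, goal, nxt, prefix)

# simplified fragment of the Triangle-Tireworld policy of the example
transitions = {("11", "move-11-21"): ["21", "21-flat"],
               ("21-flat", "changetire-21"): ["21"],
               ("21", "move-21-22"): ["22"]}
policy = {"11": "move-11-21", "21-flat": "changetire-21", "21": "move-21-22"}
execs = list(executions(policy, lambda s, a: transitions[(s, a)],
                        lambda s: s == "22", "11"))
# two finite executions, mirroring e1 and e2 of the running example
```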
FOND planning for ltl and PPLTL goals
We base our approach to goal recognition in fond domains for temporally extended goals on fond planning with ltl and ppltl goals [ 9 , 10 , 14 ]. Definition 5 formalizes a fond planning problem with ltl / ppltl goals as follows.
Definition 5
A fond planning problem with ltl / ppltl goals is a tuple P = ⟨D, s₀, φ⟩, where D is a standard fond domain model, s₀ is the initial state, and φ is the goal, formally represented either as an ltl or a ppltl formula.
In fond planning with temporally extended goals, a policy π is a partial function mapping histories , i.e., sequences of states, into applicable actions. A policy π for P achieves a temporal formula φ if and only if the sequence of states generated by π, despite the non-determinism of the environment, is accepted by the DFA corresponding to φ.
Key to our recognition approach is encoding the temporal goal formula into an extended planning domain, expressed in PDDL , which can be later consumed by off-the-shelf fond planners. Compiling planning for temporally extended goals into planning for standard reachability goals (i.e., final-state goals) has a long history in the AI Planning literature. In particular, [ 6 ] develops deterministic planning with special first-order quantified ltl goals on finite-state sequences. Their technique encodes a Non-Deterministic Finite-state Automaton (NFA), resulting from ltl formulas, into deterministic planning domains for which Classical Planning technology can be leveraged. Our parameterization of objects of interest is somehow similar to their approach. Starting from [ 6 ], always in the context of deterministic planning, [ 38 ] proposed a polynomial-time compilation of ltl goals on finite-state sequences into alternating automata, leaving non-deterministic choices to be decided at planning time. Finally, [ 9 , 10 ] built upon [ 6 ] and [ 38 ], proposing a compilation in the context of fond domain models that explicitly computes the automaton representing the ltl temporal goal and encodes it into PDDL . However, this encoding introduces a lot of bookkeeping machinery due to the removal of any form of angelic non-determinism mismatching with the devilish non-determinism of PDDL for fond .
Although inspired by such work, our approach differs in several technical details. We encode the DFA directly into a non-deterministic PDDL planning domain by taking advantage of the parametric nature of PDDL domains, which are then instantiated into propositional problems when solving a specific task. Given a fond planning problem represented in PDDL , the transformation works as follows. First, the highly-optimized MONA tool [ 20 ] transforms the temporally extended goal formula (formalized either in ltl or ppltl ) into its corresponding DFA. Second, from this DFA, we build a parametric DFA (PDFA), representing the lifted version of the DFA. Finally, the encoding of such a PDFA into PDDL yields an augmented fond domain model. Thus, this process reduces fond planning for ltl / ppltl goals to a standard fond planning problem solvable by any off-the-shelf fond planner.
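A minimal sketch of the DFA side of this pipeline: an explicit-state DFA over sets of atoms and its acceptance check on a finite trace. MONA's actual output is symbolic, and the atom name vAt(51) is only an illustrative stand-in for a grounded domain fluent.

```python
class DFA:
    """Explicit-state DFA over sets of atoms; a simplification of the
    symbolic automata produced by MONA, for illustration only."""
    def __init__(self, start, delta, finals):
        self.start, self.delta, self.finals = start, delta, finals

    def accepts(self, trace):
        q = self.start
        for symbols in trace:          # consume one trace element per step
            q = self.delta(q, symbols)
        return q in self.finals

# DFA for an "eventually vAt(51)"-style goal: stay in q0 until the atom
# holds, then move to the accepting sink q1
dfa = DFA("q0",
          lambda q, s: "q1" if q == "q1" or "vAt(51)" in s else "q0",
          {"q1"})
```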
Translation to parametric DFA
The use of parametric DFAs is based on the following observations. In temporal logic formulas and, hence, in the corresponding DFAs, propositions are represented by domain fluents grounded on specific objects of interest. We can replace these propositions with predicates using object variables and then have a mapping function that maps such variables into the problem instance objects. This yields a lifted and parametric representation of the DFA, i.e., a PDFA, which is merged with the domain. Here, the objective is to capture the entire dynamics of the DFA within the planning domain model itself. To do so, starting from the DFA we build a PDFA whose states and symbols are the lifted versions of the ones in the DFA. Formally, to construct a PDFA we use a mapping function m , which maps the set of objects of interest present in the DFA to a set of free variables. Definitions 6 and 7 formalize the mapping function and the PDFA as follows.
Definition 6
Given a set of object symbols and a set of free variables , we define a mapping function m that maps each object in to a free variable in .
Given a DFA and the objects of interest for , we can construct a PDFA as follows:
Definition 7
A PDFA is a tuple , where: is the alphabet of fluents; is a nonempty set of parametric states; is the parametric initial state; is the parametric transition function; is the set of parametric final states. All these components can be obtained by applying the mapping m to the components of the corresponding DFA.
Example 1
Given the ltl formula “ ”, the object of interest “51” is replaced by the object variable x (i.e., ), and the corresponding DFA and PDFA for this ltl formula are depicted in Fig. 2 a and b.
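As a concrete illustration of Definitions 6 and 7 and of Example 1, the lifting of a ground DFA to a PDFA can be sketched as follows. The dictionary-based automaton representation and the function names below are our own illustrative choices, not the paper's implementation.

```python
# Illustrative sketch: lifting a ground DFA to a parametric DFA (PDFA).
# The DFA alphabet is over fluents grounded on objects of interest
# (e.g., "vAt_51"); the mapping m replaces each object with a free
# variable (e.g., 51 -> x), as in Example 1.

def lift(symbol: str, m: dict) -> str:
    """Replace every object of interest in a ground symbol with its variable."""
    for obj, var in m.items():
        symbol = symbol.replace(obj, var)
    return symbol

def to_pdfa(dfa: dict, m: dict) -> dict:
    """Apply the mapping m to the components of the DFA (Definition 7)."""
    return {
        "alphabet": {lift(a, m) for a in dfa["alphabet"]},
        "states": set(dfa["states"]),
        "initial": dfa["initial"],
        "transitions": {(q, lift(a, m)): q2
                        for (q, a), q2 in dfa["transitions"].items()},
        "final": set(dfa["final"]),
    }

# DFA for "eventually vAt(51)" with the object of interest 51 mapped to x.
dfa = {"alphabet": {"vAt_51"}, "states": {"q0", "q1"}, "initial": "q0",
       "transitions": {("q0", "vAt_51"): "q1"}, "final": {"q1"}}
pdfa = to_pdfa(dfa, {"51": "x"})
print(pdfa["alphabet"])  # {'vAt_x'}
```

When the augmented domain is later grounded on a problem instance, substituting concrete objects back for the variables recovers the original DFA, as the text notes.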
When the resulting new domain is instantiated, we implicitly get back the original DFA in the Cartesian product with the original instantiated domain. Note that this way of proceeding is similar to what is done in [ 6 ], where they handle ltl goals expressed in a special fol syntax, with the resulting automata (non-deterministic Büchi automata) parameterized by the variables in the ltl formulas.
PDFA encoding in PDDL
Once the PDFA has been computed, we encode its components within the planning problem , specified in PDDL, thus producing an augmented fond planning problem whose goal is a propositional goal as in Classical Planning. Intuitively, the additional parts of are used to sequentially synchronize the dynamics between the domain and the automaton. Specifically, is composed of the following components.
Fluents
The new problem has the same fluents as the original one, plus fluents representing each state of the PDFA and a fluent called turnDomain, which controls the alternation between the domain's actions and the PDFA's synchronization action. Formally, .
Domain actions
Actions in A are modified by adding turnDomain in preconditions and the negated turnDomain in effects: and for all .
Transition operator
The transition function of a PDFA is encoded as a new domain operator with conditional effects, called trans. Namely, and , for all . To exemplify how the transition PDDL operator is obtained, Listing 1 reports the transition operator for the PDFA in Fig. 2 .
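The three encoding steps just described (extra automaton fluents, the turnDomain alternation on domain actions, and the trans operator with conditional effects) can be sketched as a small generator of PDDL text. The exact PDDL produced by the authors' compiler (Listing 1) may differ; the predicate names below (q0, q1, vAt, road) are illustrative.

```python
# Illustrative sketch of the PDDL encoding step: every domain action gains
# the turnDomain precondition and deletes turnDomain in its effect, while a
# new "trans" operator advances the automaton state via conditional effects.

def augment_action(name, precond, effects):
    """Add turnDomain alternation to an original domain action (PDDL text)."""
    return (f"(:action {name}\n"
            f" :precondition (and {precond} (turnDomain))\n"
            f" :effect (and {effects} (not (turnDomain))))")

def trans_operator(transitions):
    """Encode the PDFA transition function as one operator with conditional effects.

    transitions: list of (state, guard-formula, next-state) triples.
    """
    effs = "".join(
        f"\n  (when (and (q{q} ?x) {guard}) (and (q{q2} ?x) (not (q{q} ?x))))"
        for (q, guard, q2) in transitions)
    return (f"(:action trans\n :parameters (?x)\n"
            f" :precondition (not (turnDomain))\n"
            f" :effect (and (turnDomain){effs}))")

print(augment_action("move", "(road ?from ?to)", "(vAt ?to)"))
print(trans_operator([(0, "(vAt ?x)", 1)]))
```

The alternation of turnDomain forces executions to interleave one real domain action with one synchronization step, matching the policy executions described below.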
Initial and goal states
The new initial condition is specified as . This comprises the initial condition of the previous domain D ( ) plus the initial state of the PDFA and the predicate turnDomain. Considering the example in Fig. 1 a and the PDFA in Fig. 2 b, the new initial condition is as follows in PDDL :
The new goal condition is specified as , i.e., we want the PDFA to be in one of its accepting states and turnDomain, as follows:
Note that, both in the initial and goal conditions of the new planning problem, PDFA states are grounded back on the objects of interest thanks to the inverse of the mapping .
Executions of a policy for our new fond planning problem are , where are the real domain actions, and are sequences of synchronization trans actions, which, at the end, can be easily removed to extract the desired execution . In the remainder of the paper, we refer to the compilation just exposed as fond forLTLPLTL.
Theoretical property of the PDDL encoding
We now study the theoretical properties of the encoding presented in this section. Theorem 1 states that solving fond planning for ltl / ppltl goals amounts to solving standard fond planning problems for reachability goals. A policy for the former can be easily derived from a policy for the latter.
Theorem 1
Let be a fond planning problem with an ltl / ppltl goal , and be the compiled fond planning problem with a reachability goal. Then, has a policy iff has a policy .
Proof
( ). We start with a policy of the original problem that is winning by assumption. Given , we can always build a new policy, which we call , following the encoding presented in Section 3 of the paper. The newly constructed policy will modify histories of by adding fluents and an auxiliary deterministic action , both related to the DFA associated with the ltl / ppltl formula . Now, we show that is an executable policy and that is winning for . To see the executability, observe that, by construction of the new planning problem , the effects of the original problem's actions are left unchanged, and the auxiliary action only changes the truth value of the additional fluents given by the DFA (i.e., automaton states). Therefore, the newly constructed policy can be executed. To see that is winning and satisfies the ltl / ppltl goal formula , we reason about all possible executions. For all executions, every time the policy stops we can always extract an induced state trajectory of length n such that its last state will contain one of the final states of the automaton . This means that the induced state trajectory is accepted by the automaton . Then, by Theorem [ 15 , 17 ] .
( ). From a winning policy for the compiled problem, we can always project out all automata auxiliary actions obtaining a corresponding policy . We need to show that the resulting policy is winning, namely, it can be successfully executed on the original problem and satisfies the ltl / ppltl goal formula . The executability follows from the fact that the deletion of actions and related auxiliary fluents from state trajectories induced by does not modify any precondition/effect of original domain actions (i.e., ). Hence, under the right preconditions, any domain action can be executed. Finally, the satisfaction of the ltl / ppltl formula follows directly from Theorem [ 15 , 17 ]. Indeed, every execution of the winning policy stops when reaching one of the final states of the automaton in the last state , thus every execution of would satisfy . Thus, the thesis holds.
Goal recognition in fond planning domains with ltl and ppltl goals
This section introduces our recognition approach, which is able to recognize temporally extended ( ltl and ppltl ) goals in fond planning domains. Our approach extends the probabilistic framework of [ 35 ] to compute posterior probabilities over temporally extended goal hypotheses, by reasoning over the set of possible executions of policies and the observations. It works in two stages: the compilation stage and the recognition stage . The following sections describe in detail how these two stages work. Figure 3 illustrates the overall approach.
Goal recognition problem
We define the task of goal recognition in fond planning domains with ltl and ppltl goals by extending the standard definition of Plan Recognition as Planning [ 34 ], as follows.
Definition 8
A temporally extended goal recognition problem in a fond planning setting with temporally extended goals ( ltl and/or ppltl ) is a tuple , where: is a fond planning domain; is the initial state; is the set of goal hypotheses formalized in ltl or ppltl , including the intended goal ; is a sequence of successfully executed (non-deterministic) actions of a policy that achieves the intended goal , s.t. .
Since we deal with non-deterministic domain models, an observation sequence Obs corresponds to a successful execution in the set of all possible executions of a strong-cyclic policy that achieves the actual intended hidden goal . In this work, we consider two recognition settings: Offline Keyhole Recognition and Online Recognition . In Offline Keyhole Recognition the observed agent is completely unaware of the recognition process [ 3 ], and the observation sequence Obs is given at once and can be either full or partial —in a full observation sequence , the recognizer has access to all actions of an agent's plan, whereas in a partial observation sequence , only to a sub-sequence thereof. By contrast, in Online Recognition [ 39 ], the observed agent is likewise unaware of the recognition process, but the observation sequence is revealed incrementally instead of being given in advance and at once, which makes the recognition task much harder.
An “ideal” solution for a goal recognition problem comprises a selection of the goal hypotheses containing only the single actual intended hidden goal that the observation sequence Obs of a plan execution achieves [ 34 , 35 ]. Fundamentally, there is no exact solution for a goal recognition problem, but it is possible to produce a probability distribution over the goal hypotheses and the observations, so that the goals that “best” explain the observation sequence are the most probable ones. A solution to a goal recognition problem in fond planning with temporally extended goals is defined in Definition 9 .
Definition 9
Solving a goal recognition problem requires selecting a temporally extended goal hypothesis such that , representing how well predicts or explains what the observation sequence Obs aims to achieve.
Existing recognition approaches often return either a probability distribution over the set of goals [ 35 , 37 ] or scores associated with each possible goal hypothesis [ 31 ]. Our framework returns a probability distribution over the set of temporally extended goals that “best” explains the observation sequence Obs .
Probabilistic goal recognition
The probabilistic framework for Plan Recognition as Planning of [ 35 ] sets the probability distribution for every goal G in the set of goal hypotheses and the observation sequence Obs to be a Bayesian posterior conditional probability, as follows: where is the a priori probability assigned to goal G , is a normalization factor inversely proportional to the probability of Obs , is the probability of obtaining Obs by executing a policy , and is the probability of an agent pursuing G to select . What follows extends the probabilistic framework above to recognize temporally extended goals in fond planning domain models.
Compilation stage
We perform a compilation stage that allows us to use any off-the-shelf fond planner to extract policies for temporally extended goals. To this end, we compile and generate new fond planning domain models for the set of possible temporally extended goals using the compilation approach described in Section 3 . Specifically, for every goal , our compilation takes as input a fond planning problem , where contains the fond planning domain along with an initial state and a temporally extended goal . Finally, as a result, we obtain a new fond planning problem associated with the new domain . Note that such a new fond planning domain encodes new predicates and transitions that allow us to plan for temporally extended goals by using off-the-shelf fond planners.
Corollary 1
Let be a goal recognition problem over a set of ltl / ppltl goals and let be the compiled goal recognition problem over a set of propositional goals . Then, if has a set of winning policies that solve the set of propositional goals in , then has a set of winning policies that solve its ltl / ppltl goals.
Proof
It follows from Theorem 1 that a bijective mapping exists between policies of fond planning for ltl / ppltl goals and policies of standard fond planning. Therefore, the thesis holds.
Recognition stage
The stage that performs the goal recognition task comprises extracting policies for every goal . From such policies along with observations Obs , we compute posterior probabilities for the goals by matching the observations with all possible executions in the set of executions of the policies. To ensure compatibility with the policies, the recognizer assumes knowledge of the preference relation over actions for the observed agent when unrolling the policy during search.
Computing policies and the set of executions for
The recognizer extracts policies for every goal using the new fond planning domain models , and for each of these policies, it enumerates the set of possible executions . The aim of enumerating the possible executions for a policy is to attempt to infer what execution the observed agent is performing in the environment. Environmental non-determinism prevents the recognizer from determining the specific execution the observed agent goes through to achieve its goals. The recognizer considers possible executions that are all paths to the goal with no repeated states. The fact that the probability of entering loops multiple times is low partially justifies this assumption, and relaxing it is an important research direction for future work.
After enumerating the set of possible executions for a policy , we compute the average distance of all actions in the set of executions to a goal from initial state . Note that strong-cyclic solutions may have infinite possible executions. However, here we consider executions that do not enter loops, and for those entering possible loops, we consider only the ones entering loops at most once. Indeed, the occurrence of possibly repeated actions does not affect the computation of the average distance. In other words, if the observed agent executes the same action repeatedly often, it does not change its distance to the goal. The average distance aims to estimate “how far” every observation is to goal . This average distance is computed because some executions may share the same action in execution sequences but at different time steps. We refer to this average distance as . For example, consider the policy depicted in Fig. 1 b. This policy has two possible executions for achieving a goal from the initial state, and these two executions share some actions, such as (move 11 21). In particular, this action appears twice in Fig. 1 b due to its uncertain outcome. Therefore, this action has two different distances (if we count the number of remaining actions towards a goal) to the goal: , if the outcome of this action generates the state ; and , if the outcome of this action generates the state . Hence, since this policy has two possible executions, and the sum of the distances is 3, the average distance for this action to a goal is . The average distances for the other actions in this policy are: for (changetire 21), because it appears only in one execution; and for (move 21 22), because the execution of this action achieves a goal.
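The average-distance computation just described can be sketched compactly; the two-execution encoding below is our own simplification, chosen so that it reproduces the worked numbers for the policy of Fig. 1b (distances 2 and 1 for (move 11 21), averaging 1.5).

```python
# Sketch of the average-distance computation: for each action, average its
# distance-to-goal (number of remaining actions) over all occurrences in the
# set of possible executions of a policy.

from collections import defaultdict

def average_distances(executions):
    """executions: list of action sequences, each ending at the goal."""
    dist = defaultdict(list)
    for execution in executions:
        n = len(execution)
        for i, action in enumerate(execution):
            dist[action].append(n - 1 - i)  # remaining actions after this one
    return {a: sum(ds) / len(ds) for a, ds in dist.items()}

# Two executions of the policy in Fig. 1b (Triangle-Tireworld example):
execs = [["move 11 21", "move 21 22"],
         ["move 11 21", "changetire 21", "move 21 22"]]
d = average_distances(execs)
print(d["move 11 21"])    # (1 + 2) / 2 = 1.5
print(d["changetire 21"])  # 1.0 (appears in only one execution)
print(d["move 21 22"])    # 0.0 (achieves the goal)
```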
We use to compute an estimated score that expresses “how far” every observed action in the observation sequence Obs is to a temporally extended goal in comparison to the other goals in the set of goal hypotheses . This means that the goal(s) with the lowest score(s) along the execution of the observed actions is (are) the one(s) that, most likely, the observation sequence Obs aims to achieve. Note that the average distance for those observations that are not in the set of executions of a policy is set to a large constant number, i.e., to . As part of the computation of this estimated score , we compute a penalty value that directly affects the estimated score . This penalty value aims to increase the estimated score of those goals for which a pair of subsequent observations in Obs has no order relation in the set of executions of these goals. We use the Euler constant e to compute this penalty value , formally defined as , in which is the set of order relations of an execution . Equation ( 4 ) formally defines the computation of the estimated score for every goal given a pair of subsequent observations and the set of goal hypotheses .
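Since Equation (4) is not reproduced in this extract, the sketch below only illustrates the two ingredients the text describes — the average distance of an observed action (with a large constant for unseen observations) and an Euler-constant penalty when a pair of subsequent observations violates the order relations of the goal's executions. The exact combination used in the paper may differ; the multiplicative penalty here is our own simplification.

```python
# Illustrative (not the paper's Equation 4) estimated-score ingredients.
import math

LARGE = 1e5  # constant distance for observations absent from a goal's executions

def ordered(o1, o2, executions):
    """True if o1 occurs before o2 in at least one execution of the policy."""
    return any(o1 in e and o2 in e and e.index(o1) < e.index(o2)
               for e in executions)

def score(o_prev, o, avg_dist, executions):
    """Score of one observed action for one goal: distance, penalized by e
    when the subsequent-observation pair has no order relation."""
    d = avg_dist.get(o, LARGE)
    penalty = 1.0 if o_prev is None or ordered(o_prev, o, executions) else math.e
    return d * penalty
```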
Example 2
To exemplify the computation of the estimated score for every goal , consider the recognition problem in Fig. 4 : is vAt (11); the goal hypotheses are expressed as ltl goals, such that , and ; . The intended goal is . Before computing the estimated score for the goals, we first perform the compilation process presented before. Afterward, we extract policies for every goal , enumerate the possible executions for the goals from the extracted policies, and then compute the average distance of all actions in the set of executions for the goals from . The number of possible executions for the goals are: , and . The average distances of all actions in for the goals are as follows: : (move 11 21) = 4.5, (changetire 21) = 4, (move 21 31) = 3, (changetire 31) = 2.5, (move 31 41) = 1.5, (changetire 41) = 1, (move 41 51) = 0; : (move 11 21) = 4.5, (changetire 21) = 4, (move 21 22) = 3, (changetire 22) = 2.5, (move 22 23) = 1.5, (changetire 23) = 1, (move 23 33) = 0; : (move 11 21) = 6, (changetire 21) = 5.5, (move 21 22) = 4.5, (changetire 22) = 4, (move 22 23) = 3, (changetire 23) = 2.5, (move 23 24) = 1.5, (changetire 24) = 1, (move 24 15) = 0.
Once we have the average distances of the actions in for all goals, we can compute the estimated score for the goals for every observation : 0.43, 0.43, 0.57; and 61.87, 0.016, 0.026. Note that for the observation , the average distance for is because this observation is not an action in any of the executions in the set of executions for this goal ( Obs aims to achieve the intended goal ). Furthermore, the penalty value is applied to , i.e., . It is possible to see that the estimated score of the intended goal is always the lowest for all observations Obs , especially when observing the second observation . Note that our approach correctly infers the intended goal even when observing just a few actions.
Computing posterior probabilities for
To compute the posterior probabilities over the set of possible temporally extended goals , we start by computing the average estimated score for every goal for every observation , and we formally define this computation as , as follows: The average estimated score aims to estimate “how far” a goal is to be achieved compared to other goals ( ) averaging among all the observations in Obs . The lower the average estimated score to a goal , the more likely such a goal is to be the one that the observed agent aims to achieve. Consequently, has two important properties defined in ( 5 ), as follows.
Proposition 1
Given that the sequence of observations Obs corresponds to an execution that aims to achieve the actual intended hidden goal , the average estimated score outputted by will tend to be the lowest for in comparison to the scores of the other goals ( ), as observations increase in length.
Proposition 2
If we restrict the recognition setting by defining that the goal hypotheses are not sub-goals of each other, and we observe all observations in Obs (i.e., full observability), then the intended goal will have the lowest score among all goals, i.e., it is the case that .
After defining the computation of the average estimated score for the goals using ( 5 ), we can define how our approach tries to maximize the probability of observing a sequence of observations Obs for a given goal , as follows: Thus, by using the estimated score in ( 6 ), we can infer that the goals with the lowest estimated score will be the most likely to be achieved according to the probability interpretation from ( 5 ). For instance, consider the goal recognition problem presented in Example 2 , and the estimated scores we computed for the temporally extended goals , , and based on the observation sequence Obs . From this, we have the following probabilities for the goals: After normalizing the probabilities using the normalization factor 2 , and assuming that the prior probability is equal to every goal in the set of goals , ( 6 ) computes the posterior probabilities ( 1 ) for the temporally extended goals . A solution to a recognition problem (Definition 8 ) is a set of temporally extended goals with the maximum probability : . Hence, considering the normalizing factor and the probabilities computed before, we then have the following posterior probabilities for the goals in Example 2 : ; ; and . Recall that in Example 2 , is , and according to the computed posterior probabilities, we then have , so our approach yields only the intended goal by observing just two observations.
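The normalization step above can be illustrated as follows. Since the exact form of Equation (6) is not reproduced in this extract, the inverse-score likelihood below is just one simple monotone choice that preserves the "lower average score, higher probability" reading; the numbers reuse the scores from Example 2.

```python
# Hedged sketch of turning "lower is more likely" average scores into a
# posterior distribution with uniform priors and a normalization factor.

def posterior(avg_scores: dict, prior=None) -> dict:
    goals = list(avg_scores)
    prior = prior or {g: 1 / len(goals) for g in goals}
    # One simple monotone choice (not necessarily the paper's Equation 6):
    lik = {g: 1.0 / (1.0 + avg_scores[g]) for g in goals}
    unnorm = {g: lik[g] * prior[g] for g in goals}
    eta = 1.0 / sum(unnorm.values())  # normalization factor
    return {g: eta * unnorm[g] for g in goals}

p = posterior({"G1": 61.87, "G2": 0.016, "G3": 0.026})
# G2, the goal with the lowest average score, gets the highest posterior.
```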
Using the average distance and the penalty value allows our approach to disambiguate similar goals during the recognition stage. For instance, consider the following possible temporally extended goals: and . Here, both goals have the same formulas to be achieved, i.e., and , but in a different order. Thus, even having the same formulas to be achieved, the sequences of their policies’ executions are different. Therefore, the average distances are also different, possibly a smaller value for the temporally extended goal that the agent aims to achieve, and the penalty value may also be applied to the other goal if two subsequent observations do not have any order relation in the set of executions for this goal.
Computational analysis
The most expensive computational part of our recognition approach is computing the policies for the goal hypotheses . Thus, our approach requires calls to an off-the-shelf fond planner. Hence, the computational complexity of our recognition approach is linear in the number of goal hypotheses . In contrast, to recognize goals and plans in Classical Planning settings, the approach of [ 35 ] requires calls to an off-the-shelf Classical planner. Concretely, to compute , Ramirez and Geffner's approach computes two plans for every goal and, based on these two plans, computes a cost-difference between them and plugs it into a Boltzmann equation. For computing these two plans, this approach requires a non-trivial transformation process that modifies both the domain and problem, i.e., an augmented domain and problem that compute a plan that complies with the observations, and another augmented domain and problem to compute a plan that does not comply with the observations. Essentially, the intuition of Ramirez and Geffner's approach is that the lower the cost-difference for a goal, the higher the probability for this goal, very similar to the intuition of our estimated score .
Experiments and evaluation
This section details experiments and evaluations carried out to validate the effectiveness of our recognition approach. The empirical evaluation covers thousands of goal recognition problems using well-known fond planning domain models with different types of temporally extended goals expressed in ltl and ppltl .
The source code of our PDDL encoding for ltl and ppltl goals 3 and our temporally extended goal recognition approach 4 , as well as the recognition datasets and results are available on GitHub.
Domains, recognition datasets, and setup
The experiments and evaluation analysis employ six different well-known fond planning domain models: Blocks-World , Logistics , Tidy-up , Tireworld , Triangle-Tireworld , and Zeno-Travel . Most of them are commonly used in the AI Planning community to evaluate fond planners [ 23 , 25 ]. The domain models involve practical real-world applications, such as navigating, stacking, picking up and putting down objects, and loading and unloading objects. Some domains combine more than one of the characteristics above, namely Logistics , Tidy-up [ 26 ], and Zeno-Travel , which involve navigating and manipulating objects in the environment. In practice, our recognition approach is capable of recognizing not only the set of facts of a goal that an observed agent aims to achieve from a sequence of observations, but also the temporal order (e.g., exact order ) in which the agent aims to achieve this set of facts that represents a temporally extended goal. For instance, Tidy-up is a real-world application domain in which the purpose is to define planning tasks for a household robot that could assist elderly people in smart-home applications; there, our approach would be able to monitor and assist the household robot in achieving its goals in a specific order.
Based on these fond planning domain models, we build different recognition datasets: a baseline dataset using conjunctive goals ( ) and datasets with ltl and ppltl goals.
The ltl datasets use three types of goals: , where is a propositional formula expressing that eventually will be achieved. This temporal formula is analogous to a reachability goal; , expressing that must hold before holds. For instance, we can define a temporal goal that expresses the order in which a set of packages in the Logistics domain should be delivered; : must hold until is achieved. For the Tidy-up domain, we can define a temporal goal that no one can be in the kitchen until the robot cleans the kitchen. The ppltl datasets use two types of goals: , expressing that holds and held once. For instance, in the Blocks-World domain, we can define a past temporal goal that only allows stacking a set of blocks (a, b, c) once another set of blocks has been stacked (d, e); , expressing that the formula holds and since held was not true anymore. For instance, in Zeno-Travel , we can define a past temporal goal expressing that person1 is at city1 and since person2 is at city1, the aircraft must not pass through city2 anymore. Thus, in total, there are six different recognition datasets over the six fond planning domains and temporal formulas presented above. Each of these datasets contains hundreds of recognition problems ( 390 recognition problems per dataset), such that each recognition problem in these datasets is comprised of a fond planning domain model , an initial state , a set of possible goals (expressed in either ltl or ppltl ), the actual intended hidden goal in the set of possible goals , and the observation sequence Obs . Note that the set of possible goals contains very similar goals (i.e., and ), and all possible goals can be achieved from the initial state by a strong-cyclic policy. For instance, for the Tidy-up domain, we define the following ltl goals as possible goals : ; ; ; ; Note that some of the goals described above share the same formulas and fluents, but some of these formulas must be achieved in a different order, e.g., and , and and .
Note that our recognition approach is very accurate in discerning (Table 1 ) the order in which the intended goal is to be achieved based on only a few observations (executions of the agent in the environment).
As mentioned earlier in the paper, an observation sequence contains a sequence of actions that represent an execution in the set of possible executions of policy that achieves the actual intended hidden goal , and as before, this observation sequence Obs can be full or partial. To generate the observations Obs for and build the recognition problems, our approach extracts strong-cyclic policies using different fond planners, such as PRP and MyND. A full observation sequence represents an execution (a sequence of executed actions) of a strong-cyclic policy that achieves the actual intended hidden goal , i.e., 100% of the actions of being observed. A partial observation sequence is represented by a sub-sequence of actions of a full execution that aims to achieve the actual intended hidden goal (e.g., an execution with “missing” actions, due to a sensor malfunction). In our recognition datasets, we define four levels of observability for a partial observation sequence: 10%, 30%, 50%, or 70% of its actions being observed. For instance, for a full observation sequence Obs with 10 actions (100% of observability), a corresponding partial observations sequence with 10% of observability would have only one observed action, and for 30% of observability three observed actions, and so on for the other levels of observability.
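A simple way to generate such partial observation sequences from a full execution is to sample a fixed fraction of its actions while preserving their original order; the paper does not specify the sampling scheme, so uniform sampling below is an assumption.

```python
# Sketch: sample a partial observation sequence from a full execution at a
# given observability level (10%, 30%, 50%, or 70%), preserving order.

import random

def partial_obs(full_obs, observability, seed=0):
    """Return an order-preserving sub-sequence of the full observation sequence."""
    k = round(len(full_obs) * observability)
    idx = sorted(random.Random(seed).sample(range(len(full_obs)), k))
    return [full_obs[i] for i in idx]

full = [f"a{i}" for i in range(10)]  # a full execution with 10 actions
print(len(partial_obs(full, 0.3)))  # 3
```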
We ran all experiments using the PRP [ 25 ] planner on a single core of a 12-core Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz with 16GB of RAM, with a maximum memory usage limit of 8GB and a 10-minute timeout for each recognition problem. We are unable to provide a direct comparison of our approach against existing recognition approaches in the literature because most of these approaches perform a non-trivial process that transforms a recognition problem into planning problems to be solved by a planner [ 35 , 37 ]. Even adapting such a transformation to work in fond settings with temporally extended goals, one cannot guarantee that it would work properly in the problem setting introduced in this paper.
Evaluation metrics
Our evaluation uses widely known metrics in the Goal and Plan Recognition literature [ 31 , 34 , 39 ]. To evaluate our approach in the Offline Keyhole Recognition setting, we use four metrics, as follows: True Positive Rate ( TPR ) measures the fraction of times that the intended hidden goal was correctly recognized, e.g., the percentage of recognition problems that our approach correctly recognized the intended goal. A higher TPR indicates better accuracy, measuring how often the intended hidden goal had the highest probability among the possible goals. TPR ( 7 ) is the ratio between true positive results 5 , and the sum of true positive and false negative results 6 ; False Positive Rate ( FPR ) is a metric that measures how often goals other than the intended goal are recognized (wrongly) as the intended ones. A lower FPR indicates better accuracy. FPR is the ratio between false positive results 7 , and the sum of false positive and true negative results 8 ; False Negative Rate ( FNR ) aims to measure the fraction of times in which the intended correct goal was recognized incorrectly. A lower FNR indicates better accuracy. FNR ( 9 ) is the ratio between false negative results and the sum of false negative and true positive results; F1-Score ( 10 ) is the harmonic mean of precision and sensitivity (i.e., TPR ), representing the trade-off between true positive and false positive results. The highest possible value of an F1-Score is 1.0, indicating perfect precision and sensitivity, and the lowest possible value is 0. Thus, higher F1-Score values indicate better accuracy. In contrast, to evaluate our approach in the Online Recognition setting, we use the following metric: Ranked First is a metric that measures the number of times the intended goal hypothesis has been correctly ranked first as the most likely intended goal, and higher values for this metric indicate better accuracy for performing online recognition.
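The four offline metrics can be computed directly from the counts of true/false positives and negatives described above, e.g.:

```python
# Sketch of the offline evaluation metrics from their raw counts.

def tpr(tp, fn):
    """True Positive Rate: fraction of intended goals correctly recognized."""
    return tp / (tp + fn)

def fpr(fp, tn):
    """False Positive Rate: how often other goals are wrongly recognized."""
    return fp / (fp + tn)

def fnr(fn, tp):
    """False Negative Rate: fraction of intended goals recognized incorrectly."""
    return fn / (fn + tp)

def f1(tp, fp, fn):
    """F1-Score: harmonic mean of precision and sensitivity (TPR)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(tpr(80, 20))     # 0.8
print(f1(80, 10, 20))  # ≈ 0.842
```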
In addition to the metrics mentioned above, we also evaluate our recognition approach in terms of recognition time ( Time ), i.e., the average time in seconds to perform the recognition process (including the calls to a fond planner).
Online recognition results
With the experiments and evaluation in the Keyhole Offline recognition setting in place, we now proceed to the experiments and evaluation in the Online recognition setting. Performing the recognition task in the Online setting is usually harder than in the Offline setting, as recognition has to be performed incrementally and gradually, with the recognizer seeing the observations step-by-step, rather than analyzing all observations at once as in the Offline setting.
Figure 6 exemplifies the evaluation in the Online recognition setting. This uses the Ranked First metric, which measures how many times over the observation sequence the correct intended goal has been ranked first as the top-1 goal over the goal hypotheses . The recognition problem example depicted in Fig. 6 has five goal hypotheses (y-axis) and ten actions in the observation sequence (x-axis). As stated before, the recognition task in the Online setting is done gradually, step-by-step, so at every step our approach ranks the goals according to the probability distribution over the goal hypotheses . The example in Fig. 6 shows the correct goal Ranked First six times (at the observation indexes 4, 6, 7, 8, 9, and 10) over the observation sequence of ten observations, which means that the correct intended goal is Ranked First (i.e., as the top-1 , with the highest probability among the goal hypotheses ) 60% of the time in the observation sequence for this recognition example.
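The Ranked First computation can be sketched on a toy trace mirroring the example above; the goal names and probability values below are made up for illustration.

```python
# Sketch of the Ranked First metric: at each observation step the goals are
# ranked by posterior probability, and we count how often the true goal is
# ranked top-1.

def ranked_first(per_step_probs, true_goal):
    """per_step_probs: one {goal: probability} dict per observation step."""
    hits = sum(1 for probs in per_step_probs
               if max(probs, key=probs.get) == true_goal)
    return hits / len(per_step_probs)

# True goal "G*" is top-1 at 6 of 10 steps, as in the Fig. 6 example.
steps = [{"G*": 0.2, "G2": 0.5}] * 4 + [{"G*": 0.7, "G2": 0.3}] * 6
print(ranked_first(steps, "G*"))  # 0.6
```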
Figure 7 aggregates the average recognition results of all six datasets for the Ranked First metric as a histogram, considering full observation sequences that represent executions (sequences of executed actions) of strong-cyclic policies that achieve the actual intended goal . The results represent the overall percentage (including the standard deviation – black bars) of how often the correct intended goal has been ranked first over the observations. The average results indicate that our approach is in general accurate in correctly recognizing the temporal order of the facts in the goals in the Online recognition setting, yielding Ranked First percentage values greater than 58%.
Figures 8 , 9 , 10 , 11 , 12 , and 13 show the Online recognition results separately for all six domain models and the different types of temporally extended goals. By analyzing the Online recognition results more closely, one can see that our approach converges to ranking the correct goal as the top-1 mostly after a few observations. This means that it is commonly hard to disambiguate among the goals at the beginning of the execution, which, in turn, directly affects the overall Ranked First percentage values (as shown in Fig. 7 ). Here, our approach struggles to disambiguate and correctly recognize the intended goal for some recognition problems and some types of temporal formulas. Namely, our approach struggled to disambiguate when dealing with ltl Eventuality goals in Blocks-World (see Fig. 8 a), for most temporally extended goals in Tidy-Up (see Fig. 10 ), and for ltl Eventuality goals in Zeno-Travel (see Fig. 13 a).

Acknowledgements
This work has been partially supported by the ERC ADG WhiteMech (No. 834228), the EU ICT-48 2020 project TAILOR (No. 952215), the PRIN project RIPER (No. 20203FFYLK), and the PNRR MUR project FAIR (No. PE0000013).

Appl Intell (Dordr). 2024 Dec 14; 54(1):470-489
Introduction
High-rate anaerobic wastewater treatment applies granular biofilms to achieve stable and efficient production of biogas (Lettinga et al. 1983 ). Comprised of tightly interconnected bacteria and archaea of diverse metabolic functionalities, granular anaerobic biofilms (GAB) carry the full anaerobic digestion of organic substrates through hydrolysis, acidogenesis, acetogenesis and methanogenesis. Since the discovery of GAB in the 1980s, most of the studies focused on deciphering the physico-chemical factors affecting the formation and stability of these aggregates (Feldman et al. 2017 ). In contrast, studies of the physiological and molecular principles governing the assembly and organization of microorganisms within the granules have received far less attention (Dang et al. 2022 ; Chen et al. 2022 ). This knowledge gap stands out especially in comparison to the information collected on the organization of related aerobic granular biofilms or activated sludge (Wilén et al. 2018 ). Above all, most of the knowledge in the field of biofilm research is almost exclusively focused on surface-attached microbial growth, with vertical or horizontal cell expansion, in either environmental or clinical settings, but not on suspended aggregate growth, as is the case for GAB (Sauer et al. 2022 ).
From the studies of aerobic and facultatively anaerobic biofilms, it is known that intracellular signalling (with second messenger molecules, like cyclic-AMP and cyclic-di-GMP) and quorum sensing (QS) play a crucial role in regulating the microbial assembly (Papenfort and Bassler 2016 ). Through inter- and intracellular communication, microorganisms can synchronize their activity and regulate gene expression within the whole community (Eberl et al. 1996 ; McNab et al. 2003 ; Egland et al. 2004 ; Gantner et al. 2006 ; Evans et al. 2018 ; Prentice et al. 2022 ). Depending on the amount of QS signalling molecules, such as N-acyl homoserine lactones (AHLs), microorganisms can switch back and forth between an actively dispersing growth mode and a sedentary biofilm-producing mode. Although well-studied QS systems are still restricted to bacterial communities, a recent report points to the presence of AHL-analogous signalling activity in biofilm-forming archaea (Charlesworth et al. 2020 ).
In the case of suspended biofilms, like GAB and its aerobic counterpart, the occurrence of QS has been hypothesized based on the detection of AHLs in organic extracts from both sludge and bioreactor media. It has been observed that granulation, considering both aerobic and anaerobic granular systems, is generally correlated with increased levels of AHLs in the sludge phase (although amounts of specific AHLs increased only after the first aggregates are formed) (Chen et al. 2019 ; Tan et al. 2014 ; Yan et al. 2021 ). Furthermore, the addition of synthetic AHLs led to an increased production of extracellular polymeric substances (EPS) by the mixed microbial community within the aerobic and anaerobic sludges, with longer acyl chains (C8-C12) homoserine lactones having a more positive influence on the EPS production in the matured GAB (Lv et al. 2018 ; Ma et al. 2018 ; Mit Prohim et al. 2023 ). It was also shown that microbial communities within aerobic sludge can respond differently to AHLs with varying acyl chain length within a broad concentration range from 10 ng/L to 100 μg/L (Wang et al. 2021 ). For example, a steady supply of C6-homoserine lactones in high quantity negatively affected the aggregate-forming ability of aerobic sludge and destabilized microbial cell-to-cell adhesions (Shi et al. 2022 ). Therefore, QS might indeed be relevant for the suspended microbial aggregation, and AHLs can be influencing the biosynthesis and accumulation of EPS in the sludge of mixed microbial communities.
While available knowledge of QS signalling within aerobic and facultatively anaerobic clinically relevant microorganisms can be used as a starting point to learn about the potential QS mechanisms within GAB, identifying the differences in the attached and suspended biofilm-forming mechanisms is critical (Sauer et al. 2022 ). However, there is a limited number of pure or co-culture studies of obligately anaerobic microorganisms (both bacteria and archaea) forming either attached or suspended biofilms (Krumholz et al. 2015 ; Cong et al. 2021 ), which could be useful models to start understanding the molecular mechanisms of GAB formation. Moreover, limited knowledge on QS-mediated biofilm formation in Archaea (Fröls 2013 ; Orell et al. 2013 ; Charlesworth et al. 2020 ), which can contribute up to 50% by relative abundance to the GAB microbial population, makes molecular understanding of the GAB formation mechanisms even more challenging. Recent studies of QS-associated genes in anaerobic consortia point to the presence of QS-based interactions between fatty-acid oxidizing bacteria and methanogenic archaea (Yin et al. 2020 ). However, experimental evidence for these interactions is still lacking.
Here, we present a first attempt to fill in this knowledge gap by looking into the molecular mechanisms behind formation of granular biofilms in the co-cultures of fatty-acid oxidizing bacteria and methanogenic archaea. Co-cultures comprised of propionate-oxidizing Syntrophobacterium fumaroxidans (formerly Syntrophobacter fumaroxidans ) and hydrogenotrophic methanogens, Methanobacterium formicicum or Methanospirillum hungatei, were maintained for a year in a fed-batch mode to promote aggregate formation. QS signalling molecules, such as N-acyl-homoserine lactones, were identified in the supernatants from the aggregating co-cultures, but were no longer present soon after the mm-scale aggregates matured. Differential gene expression analyses of co-cultures of S. fumaroxidans and methanogens in the early- and late-aggregation states pointed to aggregation-associated biochemical changes in the activity of all three microorganisms, especially with regard to the expression of genes in the polysaccharide production and signal transduction systems.

Materials and methods
Cultivation of microorganisms
Pure cultures of Methanospirillum hungatei strain JF1 (DSM 864 T ), Methanobacterium formicicum strain MF (DSM 1535) and Syntrophobacterium fumaroxidans (formerly Syntrophobacter fumaroxidans strain MPOB, DSM 10017 T ) were obtained from the German Collection of Microorganisms and Cell Culture (DSMZ, Braunschweig, Germany). Microbial cultivations were performed in bicarbonate-buffered mineral salt medium prepared as described previously (Stams et al. 1993 ). Boiled and N 2 -flushed medium (46 ml) was dispensed into 117 ml serum vials, which were sealed with butyl rubber septa and crimped with aluminium caps. The vials’ headspace was flushed with 80/20 (v/v) H 2 /CO 2 (for growing pure cultures of methanogens) or 80/20 (v/v) N 2 /CO 2 (for syntrophic co-cultures and pure S. fumaroxidans MPOB cultures) and finally pressurized to 1.5 bar. Medium was autoclaved at 120°C for 20 min. Before inoculation, the medium was supplemented with filter-sterilized vitamins solution (Stams et al. 1993 ) and reduced with Na 2 S·xH 2 O ( x = 9–11) (added to a final concentration of ~1 mM). Pure cultures of S. fumaroxidans were grown on 20 mM sodium propionate as an electron donor and 15 mM sodium sulfate as an electron acceptor. Co-cultures of S. fumaroxidans and methanogens were grown on 20 mM sodium propionate alone.
Syntrophic co-cultures were constructed by mixing 10% v/v inoculum of the pre-grown pure culture of S. fumaroxidans and 10% (v/v) of one of the pre-grown pure cultures of methanogens. All co-cultures were prepared in triplicates. Pure cultures of methanogens were incubated on a shaker platform (180 rpm), while co-cultures and pure cultures of S. fumaroxidans were not shaken during incubation. Cultivations of pure cultures were done by transferring 10% (v/v) of an actively growing culture into a bottle with fresh medium. All cultures grew at 37°C in the dark.
Aggregate formation experiment
Aggregate formation experiments were performed in 250-mL serum bottles with 100 mL mineral medium. Bottles were inoculated with 10% (v/v) of exponentially growing cultures of S. fumaroxidans and M. formicicum or M. hungatei (co-cultures Sf-Mf and Sf-Mh) in triplicate. Co-cultures were maintained in a fed-batch mode, by routinely exchanging 50% of the medium with fresh medium containing 40 mM of propionate (to achieve 20 mM propionate in the cultivation media) (Figure S 1 ). Media replacement was done when the propionate concentration in the medium reached about 3–4 mM. Since co-cultures were grown non-shaking, each medium exchange was done as carefully as possible, removing the top liquid part of the serum bottle with a syringe, while trying not to disturb any precipitated/aggregated cells at the bottom of the bottle. Each medium exchange denoted a new “Cycle” in the cultivation, allowing us to track the number of nearly complete growth cycles of the co-cultures. When pure cultures of S. fumaroxidans and each of the methanogens were first mixed in the same bottle (10% v/v of each co-culture member), “Cycle 0” commenced. Transfer of 10% of the grown together co-cultures from Cycle 0 to a new bottle with fresh medium and 20 mM propionate commenced “Cycle 1” (also referred to as “early-aggregation state co-cultures”). The first 50% medium exchange in the Cycle 1 bottle at 80–90% of propionate consumption denotes the start of “Cycle 2.” This scheme of 50% medium exchanges and cycle numbering was then maintained. Culture purity was routinely checked with light microscopy.
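The propionate concentration re-established by each 50% medium exchange follows from simple mixing arithmetic. A quick sanity check, assuming a residual concentration of about 3.5 mM at the moment of exchange (the 3–4 mM trigger mentioned above; the exact residual value is an assumption):

```python
def after_exchange(residual_mM, feed_mM=40.0, exchanged_fraction=0.5):
    """Propionate concentration after replacing `exchanged_fraction` of the
    culture volume with fresh medium containing `feed_mM` propionate."""
    return (1 - exchanged_fraction) * residual_mM + exchanged_fraction * feed_mM

print(after_exchange(3.5))  # 21.75 -> close to the 20 mM target
```

With a residual of 0 mM the exchange would restore exactly 20 mM, which is why the feed is prepared at twice the target concentration.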
Extraction of RNA
Total RNA extractions were done from the cells of triplicates of the early- (Cycle 1) and late-aggregation state (Cycles 20/23) co-cultures and pure cultures of S. fumaroxidans and the two methanogens in mid-exponential growth (~10 mM propionate or 70% (v/v) H 2 consumed). Cell pellets were harvested from 110 mL of each culture by centrifugation at 10,000g at 4 °C (Sorvall Legend XTR, Thermo Fisher, Waltham, MA) under sterile conditions, washed in sterile cold TE buffer, snap-frozen in liquid nitrogen and stored at −70 °C. Lysis and protein precipitation were performed using the solutions and enzymes from the MasterPure™ Gram Positive DNA Purification Kit (Bioresearch Technologies, UK). Briefly, samples were incubated with lysozyme (1 μL) at 37°C for 20 min, followed by the addition of 4 μL of β-mercaptoethanol, sonication using a Bandelin SONOPULS HD 3200 ultrasonic homogenizer (6 cycles of 20 s pulse, 30 s pause) and proteinase K incubation at 60°C for 15 min. Protein precipitation was performed according to the kit specifications. Automated RNA purification was performed using a Maxwell® 16 MDx instrument and the LEV simplyRNA Purification Kit (Promega, Madison, WI). RNA was sequenced at Novogene (Novogene, UK) on the NovaSeq6000 (Illumina, San Diego, CA), yielding 150 bp paired-end reads.
Transcriptomics
The obtained transcriptome reads were trimmed by bbduk.sh of BBmap (v38.84) (ktrim=r, k=23, mink=7, hdist=1, tpe, tbo, qtrim=rl, trimq=30, ftm=5, maq=20, minlen=50), followed by a fastQC quality check (v0.11.9). Three reference genomes were annotated with KEGG (Kanehisa et al. 2016 ), InterPro scan (Paysan-Lafosse et al. 2023 ) and Prokka v1.14.6 (Seemann 2014 ): S. fumaroxidans (RefSeq GCF_000014965.1), M. hungatei (RefSeq GCF_000013445.1) and M. formicicum (RefSeq GCF_029848115.1). Sequences of hydrogenases were additionally checked with conserved domain search (Lu et al. 2020 ) and HydDB (Søndergaard et al. 2016 ) for the specific catalytic subunits. All transcripts were mapped against all three reference genomes using bbsplit.sh, and mapped genes were counted using samtools view (version 1.10) (-SF 260, cut -f 3). Mapped counts were further analysed in Rstudio (v4.0.2), separately for each organism. The Rmd script used for data analysis is supplied (Supplementary file 1 ). To determine gene expression ranking within each of the two conditions, counts were normalized to Transcripts Per Million (TPM). The Bioconductor package DESeq2 v1.30.16 (Love et al. 2014 ) was used for additional normalization to allow differential expression analysis between the samples. To determine significant differential expression, we considered a fold-change ≥ 1.5 and a p-value ≤ 0.05 adjusted by the Benjamini–Hochberg method (Benjamini and Hochberg 1995 ). Data visualization was performed in R, using DESeq2 normalized counts. Raw reads were deposited to the European Nucleotide Archive under the study accession number ERP148018.
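For illustration, the TPM normalization and the significance filter described above can be sketched as follows. This is a minimal re-implementation in Python; the actual analysis was done with DESeq2 in R, and the counts and gene lengths below are made up:

```python
def tpm(counts, lengths_bp):
    """Transcripts Per Million: divide each raw count by gene length in kb,
    then rescale so the values for a sample sum to one million."""
    rpk = [c / (l / 1000.0) for c, l in zip(counts, lengths_bp)]
    scale = sum(rpk) / 1e6
    return [r / scale for r in rpk]

def is_significant(fold_change, padj):
    # Thresholds used in the study: fold-change >= 1.5 and BH-adjusted
    # p-value <= 0.05 (down-regulation would be handled symmetrically,
    # e.g. via |log2 fold-change|).
    return fold_change >= 1.5 and padj <= 0.05

# Toy sample: three genes with different lengths.
vals = tpm([100, 400, 500], [1000, 2000, 500])
print([round(v) for v in vals])  # [76923, 153846, 769231]
```

Note that TPM values are comparable within a sample (they always sum to 10^6) but not across samples, which is why DESeq2's between-sample normalization was used for the differential expression tests.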
Scanning electron microscopy (SEM)
Stationary phase grown aggregated co-cultures from Cycles 30/34 were subjected to scanning electron microscopy. Centrifugation of the samples was avoided during sampling to prevent unnatural clumping of microbial cells and thus interference with the detection of any aggregates. Aliquots of culture samples (or single aggregates) were directly mounted on coverslips coated with Poly-L-Lysine (Corning BioCoat, Corning Life Sciences, Tewksbury, MA) and fixed with 3% (v/v) glutaraldehyde and 1% (v/v) OsO 4 for 1 h at room temperature. Next, samples were dehydrated in graded ethanol solutions in water (10, 30, 50, 70, 80, 90, 96, 100%) for 10 min each and critical point dried with liquid carbon dioxide using an automated critical point dryer Leica EM CPD300 (Leica, Wetzlar, Germany). Cells were studied with an FEI Magellan 400 scanning electron microscope.
Fluorescent microscopy
Syto 16 (Invitrogen, Waltham, MA) was applied to stain the DNA of all cells in the samples. Aliquots of 200 μL were sampled from stationary phase grown aggregated co-cultures and washed with 1.5 mL PBS buffer (1.8 g L −1 Na 2 HPO 4 , 0.223 g L −1 NaH 2 PO 4 , 8.5 g L −1 NaCl, pH 7.2) by centrifuging for 3 min at 4000 g at 10°C. The staining was done on the pellets resuspended in 1 mL PBS, by adding 40 μL of 96% ethanol and 1 μL of Syto16 (1 mM solution in DMSO). Samples were incubated for 3 h, covered with aluminium foil, in the dark, at room temperature. Stained samples were visualized under the fluorescent microscope Nikon Eclipse Ti2 with a 100× objective, using a green fluorescence filter (excitation 465/495 nm, emission 515/555 nm) to select for Syto 16 fluorescence and a blue fluorescence filter (excitation 340/380 nm, emission 435/485 nm) to select for the autofluorescence of cofactor F 420 of the methanogenic cells. The two fluorescent signals were overlaid in the final image processed in NIS Elements AR (version 5.21.03 64-bit).
Analytical methods
Gaseous compounds (H 2 , CH 4 ) were analysed by gas chromatography on a CompactGC 4.0 (Interscience, Breda, Netherlands) equipped with a thermal conductivity detector. Argon gas was used as a carrier gas at a flow rate of 1 ml min −1 . Gas sample (0.2 ml) was injected onto a Carboxen 1010 column (3 m × 0.32 mm) followed by a Molsieve 5A column (30 m × 0.32 mm). The temperatures in the injector, column and detector were 100°C, 140°C and 110°C, respectively. The limit for H 2 detection was 0.01% v/v (0.006mM). Organic acids (formate, acetate, propionate) were quantified using a Shimadzu HPLC (Kyoto, Japan), equipped with a Shodex column (SH-1011), and UV/RID detectors. A flow rate of 1 ml min −1 was used with sulphuric acid (0.01 N) as mobile phase and column temperature set at 45 °C. The limits of quantification were 0.2–0.5 mM for all three organic acids.
Extraction of N-acyl homoserine lactones (AHLs)
Culture supernatants were collected at the late exponential growth phase at fed-batch Cycles 1 to 33 by centrifugation at 10,000 × g for 20 min at 4°C (Sorvall Legend XTR, Thermo Fisher, Waltham, MA) in sterile 50 mL tubes (Greiner, Germany). AHLs were extracted twice from the supernatants by adding equal volumes of ethyl acetate-acetone mixture (4:1, v/v) to a conical flask containing the supernatant, which was then sealed with parafilm and shaken at room temperature at 180 rpm for 1 h. After that, the supernatant-extractant mixture was transferred to a separation funnel, and the upper organic phase was collected. The bottom aqueous phase was subjected to a second extraction. Pooled extracts from the two extractions were dehydrated by passing through an anhydrous Na 2 SO 4 -packed syringe column. Flowthroughs were filtered through a 0.45-μm hydrophilic PTFE syringe filter (BGB, China) and dried on the SpeedVac (Eppendorf, Germany) with high-vacuum settings at 30°C. Dried extracts were stored at −20°C.
AHL separation with UHPLC
Dried extracts were resuspended in 3 mL of HPLC-grade acetonitrile (Sigma Aldrich, St. Louis, MO), and 2 μL was injected on a Vanquish Horizon UHPLC system (Thermo Scientific, San Jose, CA). The autosampler temperature was set at 10 °C. ACN was used as the strong needle wash before and after injection. For separation, an Acquity BEH-C18 column (150 × 2.1 mm, Waters, Milford, MA) with a BEH-C18 VanGuard guard column (5 × 2.1 mm, Waters, Milford, MA) was used. Gradient elution was done using mobile phase A (premix 0.1% formic acid in UHPLC-grade water; Biosolve, Valkenswaard, The Netherlands) and mobile phase B (premix 0.1% formic acid in acetonitrile; Biosolve, Valkenswaard, The Netherlands). The flow rate of the mobile phases was 0.4 mL × min −1 . The gradient settings were: 0.00–1.10 min isocratic on 5% B; 1.10–24.88 min linear gradient from 5% B to 70% B; 24.88–25.97 min linear gradient from 70% B to 5% B; and 25.97–31.50 min isocratic on 5% B. The column temperature was set at 45°C and the post-column cooler at 40°C.
AHL detection with mass spectrometry
AHLs were analysed on a Thermo Q Exactive Focus hybrid quadrupole-orbitrap mass spectrometer (Thermo Scientific, San Jose, CA) equipped with a heated ESI probe (ESI-FTMS). Before analysis, the orbitrap was calibrated in positive ionization mode using Tune 2.11 (Thermo Scientific, San Jose, CA) by injection of Pierce positive ion calibration solutions (Thermo Scientific, San Jose, CA). The parameters of the positive ion mode were as follows: Nitrogen was used as a sheath gas (50 arbitrary units), auxiliary gas (13 arbitrary units) and sweep gas (1 arbitrary unit). Full MS data in the mass range 170–450 (m/z) were recorded using 70,000 FWHM mass resolution. The source voltage was set at 3.5 kV, the S-lens RF level at 50%, the capillary temperature at 263 °C and the auxiliary gas heater temperature at 425 °C. Data acquisition and processing were performed using Xcalibur version 4.3 (Thermo Scientific, San Jose, CA).
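Quantification against the external standard curves (described below) amounts to a linear fit of peak area versus concentration. The sketch below is a minimal illustration that also derives an RMSE-based LOD; the 3.3·RMSE/slope convention is an assumption (the exact constant used in the study is not stated), LOQ is taken as 3 × LOD as in the study, and the calibration points are made up:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration curve
    (concentration on x, peak area on y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def lod_loq(x, y):
    """LOD from calibration-curve residuals: 3.3 * RMSE / slope (assumed
    convention); LOQ = 3 * LOD."""
    slope, intercept = linear_fit(x, y)
    rmse = (sum((yi - (slope * xi + intercept)) ** 2
                for xi, yi in zip(x, y)) / len(x)) ** 0.5
    lod = 3.3 * rmse / slope
    return lod, 3 * lod

def quantify(area, slope, intercept):
    """Back-calculate a sample's peak area to a concentration (ng/mL)."""
    return (area - intercept) / slope

# Hypothetical calibration points spanning the 0.5-50 ng/mL standard range.
conc = [0.5, 1.0, 5.0, 10.0, 50.0]
area = [52.0, 101.0, 498.0, 1010.0, 5003.0]
slope, intercept = linear_fit(conc, area)
```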
Standards of seven AHLs (Table S 1 ), namely N-Butyryl-DL-homoserine lactone (C4-HSL), N-Hexanoyl-L-homoserine lactone (C6-HSL), N-(Ketocaproyl)-d,l-homoserine lactone (3-oxo-C6-HSL), N-Heptanoyl-DL-homoserine lactone (C7-HSL), N-Octanoyl-L-homoserine lactone (C8-HSL), N-(3-Oxodecanoyl)-L-homoserine lactone (3-oxo-C10-HSL) and N-Dodecanoyl-L-homoserine lactone (C12-HSL), were purchased from Sigma-Aldrich (St. Louis, MO) and dissolved in HPLC-grade acetonitrile to prepare standard curves for the UHPLC-MS/MS. Serial dilutions were prepared for each standard ranging from 0.5 to 50 ng/mL. Standards of AHLs were tested twice during each UHPLC-MS/MS sequence: at the beginning and at the end of the sequence. Peak area values averaged from the two injections of the same standard were used to build a standard curve. Standard curves with R 2 =0.99 of varying acyl chain length AHLs were prepared freshly before each analysis of the culture extracts on the UHPLC-MS/MS. The limit of detection (LOD) for tested AHLs was determined from the calibration curves by applying the root mean square error (RMSE) (Andini et al. 2020 ). The limit of quantification (LOQ) was calculated as 3 times the value of the LOD. The LOD varied slightly between MS/MS runs and ranged within 0.2–0.5 ng/mL for different molecules in selective ion mode (SIM) (Table S 2 , Figure S 2 ). The mass filter used for quantification in SIM mode was set to 5 ppm. For validation of the AHL extraction, recovery and identification procedures, sterile bicarbonate-buffered mineral salt medium was spiked with synthetic C4-HSL (5 ng/μL) and processed as a separate sample alongside the co-culture supernatants. The recovered amount of C4-HSL was then calculated from the MS/MS chromatograms by comparing the areas of the peaks to the areas in the standard curves of C4-HSL from the same UHPLC-MS/MS sequence.

Results
Physiological and morphological analysis of early- and late-aggregation state co-cultures
In this study, we successfully achieved aggregation of two syntrophic propionate-oxidizing microbial co-cultures comprised of (1) Syntrophobacterium fumaroxidans with Methanobacterium formicicum (denoted Sf-Mf) and (2) Syntrophobacterium fumaroxidans with Methanospirillum hungatei (denoted Sf-Mh). In both syntrophic co-cultures, S. fumaroxidans performed the oxidation of propionate to hydrogen and acetate, while the methanogens converted hydrogen and CO 2 into methane (Fig. 1 ). Both methanogens, M. formicicum and M. hungatei , are also known to be able to use formate as an electron donor, in addition to hydrogen (Schauer et al. 1982 ). While acetate and methane were produced in all the co-cultures (Fig. 1 ), neither formate nor hydrogen was above the instrumental sensitivity level of 0.5 mM (formate) or 0.006 mM (hydrogen) at any point of co-culture growth. The consistent production of methane, without a significant lag phase, demonstrates the efficient syntrophic relationship in the co-cultures studied here.
Pure cultures of S. fumaroxidans and M. hungatei did not aggregate at any stage of their growth, while pure cultures of M. formicicum self-aggregated in the early stationary phase (Figure S 3 ). Meanwhile, Sf-Mf co-cultures formed small aggregates (less than 10 μm in diameter), visible under 100× magnification in the light microscope, already at Cycle 0. However, they were not visible to the naked eye until Cycle 8 (Figure S 3 ). Sf-Mh co-cultures produced visible aggregates later, at Cycle 13. Overall, it took 5 months for both co-cultures to produce mm-scale aggregates visible to the naked eye (Fig. 2 ). Using the same volumetric loading of the inoculum from early- and late-aggregation state co-cultures (Figure S 1 ), we saw that both Sf-Mf and Sf-Mh co-cultures had increased propionate oxidation rates in Cycle 45/51, compared to the co-cultures from Cycle 1: from 0.55 ± 0.03 to 0.70 ± 0.09 mM propionate day −1 for Sf-Mf and from 0.79 to 1.03 ± 0.12 mM propionate day −1 for Sf-Mh (Fig. 1 ).
Both aggregated Sf-Mf and Sf-Mh co-cultures were examined using scanning electron microscopy (SEM) and fluorescent microscopy at Cycle 30/34 (Fig. 3 ). Overlaying the autofluorescence of the methanogenic cofactor F 420 with the fluorescence signal from the general DNA dye Syto16 allowed us to pinpoint the location of the two microbial species in the co-cultures. This revealed that the two species are intertwined or closely associated within the clusters.
Identification and characterization of N-acyl-homoserine lactones in the syntrophic co-cultures
Supernatants from the methanogenic co-cultures, pure cultures of S. fumaroxidans and pure cultures of the two methanogens were collected at different aggregate formation stages to check for the presence of extracellularly secreted N-acyl-homoserine lactones (AHLs) (Materials and Methods, Figure S 4 , S 5 ). All cultures were sampled at the late exponential growth stages. The analysis of the supernatants from the pure cultures and co-cultures at different aggregation stages revealed the presence of varying quantities of four AHLs (Table 1 ). C7-HSL, C8-HSL and C12-HSL were found in most extracts, both in the pure cultures of M. formicicum and M. hungatei , and their respective co-cultures with the syntroph. C12-HSL was present in the highest quantities in some samples (Cycle 9, Sf-Mh co-cultures), although biological replicates of the extracts demonstrated a high variability in the amounts of the molecule (Table 1 ). 3-Oxo-C6-HSL was found in most of the samples but was below the limit of quantification (less than 0.5 ng/mL), despite having a symmetric peak shape and a matching retention time to the 3-oxo-C6-HSL in the standards. In many chromatograms we also observed peaks with a retention time and mass similar to C4-HSL; at low concentrations these peaks exhibited irregular, non-symmetric shapes and were below the limits of quantification. Therefore, we excluded C4-HSL from the analysis of the co-culture supernatants.
Differential gene expression of the early- and late-aggregation state syntrophic co-cultures
For transcriptome analysis, total RNA was extracted from triplicates of both types of syntrophic co-cultures in the early- (Cycle 1) and late-aggregation states (Cycle 20 for Sf-Mf and Cycle 23 for Sf-Mh).
As a result of mapping the transcriptome reads to the corresponding microorganism, we observed a change in the ratio of the syntroph to methanogen during the co-cultivation and maturation of the aggregates (Table S 3 ). While transcripts affiliated with methanogens were prevalent in either of the co-cultures throughout the co-cultivation, this was especially pronounced in the early-aggregation state (an almost 1:4 syntroph:methanogen ratio). As the co-culture aggregates matured, the number of syntroph-affiliated transcripts increased, especially for the tight aggregates of Sf-Mf (1:1.5 ratio).
When comparing the four sets of transcriptomes from the methanogenic co-cultures, all triplicate samples clustered separately depending on the age (Figure S 6 ) and methanogen used for the co-culture (Figure S 7 ). We observed that expression of S. fumaroxidans genes was less affected by the methanogenic partner (5% statistically significantly differentially expressed genes) than by the aggregation state of the co-cultures (20% statistically significantly differentially expressed genes) (Figures S 8 –S 11 ).
The most prominent changes in the S. fumaroxidans gene expression that could be correlated with the methanogenic partner were observed in the late-aggregation state co-cultures. In Sf-Mh Cycle 23 co-cultures, S. fumaroxidans had a tenfold statistically significantly upregulated expression of the periplasmic formate dehydrogenase (Sfum_0035-0037) and an operon containing isoquinoline 1-oxidoreductase (Sfum_1729-1732), compared to the Sf-Mf co-cultures at Cycle 20. By comparison, in the Sf-Mf Cycle 20 co-cultures, S. fumaroxidans had a tenfold statistically significantly upregulated [FeFe] hydrogenase (Sfum_0843-0848), operons of vitamin B12 transporters (Sfum_0491-0495) and genes for the biosynthesis of tryptophan (Sfum_1771-1778), compared to the Sf-Mh Cycle 23 co-cultures (Supplementary data 1 ).
Gene expression of S. fumaroxidans in the early- and late-aggregation states in both Sf-Mh and Sf-Mf co-cultures revealed differences in propionate oxidation/methanogenesis metabolisms, metal and amino acid transport and signal transduction depending on the aggregate maturation state (Fig. 4 , Figure S 7 -S 15 and Supplementary data 1 ). Additionally, genes associated with chemotaxis, flagella and pili biosynthesis, production/secretion of polysaccharides and turn-over of quorum sensing-affiliated molecules were also differentially expressed. It is worth noting that a high number of the statistically significantly differentially expressed genes in late-aggregation state co-cultures have unknown function/classification.

Discussion
In this investigation of the aggregate-forming ability of syntrophic propionate-oxidizing co-cultures, we compare the aggregate morphology and biochemical make-up of early- and late-aggregation state co-cultures of S. fumaroxidans and methanogens. Although there are sporadic reports of aggregate formation in syntrophic propionate-oxidizing co-cultures (de Bok et al. 2002 ; Worm et al. 2011 ; Krumholz et al. 2015 ; Cong et al. 2021 ), no mechanistic understanding of the phenomenon exists. In this study, we monitored the formation of biofilm aggregates of S. fumaroxidans and M. formicicum or M. hungatei for over 1 year in fed-batch cultivation. Maturation of the co-culture aggregates enriched for syntrophic cells, as can be noted from the higher number of S. fumaroxidans -specific transcripts in each co-culture in the late-aggregation state (Cycle 20/23), compared to the early-aggregation state co-cultures (Cycle 1). Long-term adaptation to growth in a syntrophic relationship and S. fumaroxidans enrichment in the Cycle 20/23 co-cultures can explain the improved (+30%) propionate oxidation rates compared to the early (Cycle 1) co-cultures. Below we discuss specific differences in the metabolism and gene expression of the two syntrophic co-cultures in the early- and late-aggregation stages.
Propionate oxidation and methanogenesis metabolism
Long-term presence of either of the methanogenic partners with S. fumaroxidans led to similar gene expression levels in the main energy metabolism of the syntroph in both late-aggregation state co-cultures (Fig. 4 , Figure S 12 ). While genes required for the initial activation of the fatty acid in S. fumaroxidans (Sfum_3926-Sfum_3933) were statistically significantly upregulated in the late-aggregation state co-cultures with M. formicicum , the rest of the genes encoding enzymes for propionate oxidation via methyl-malonyl pathway were slightly upregulated in the early-aggregation stages with either of the methanogens, compared to the later aggregation stages. Similarly, most of the hydrogenases of S. fumaroxidans with either of the methanogens were also slightly upregulated in the early co-cultures, compared to the later ones (expression of the membrane-bound [NiFe] Fhl-h (Sfum_1791-1794), [NiFe] Hox (Sfum_2712-2716), [NiFe] Mvh1,2 (Sfum_3535-3537, Sfum_3954-3957)). However, expression of coenzyme F420-reducing hydrogenases (Sfum_2221-2224; Sfum_3954-3958) was statistically significantly higher in the late-aggregation state Sf-Mf co-cultures, compared to the aggregates from the Cycle 1. Regardless of the aggregate maturation state, S. fumaroxidans had a statistically significantly upregulated [FeFe] hydrogenase (Sfum_0843-0848) in the co-cultures with M. formicicum , compared to the co-cultures with M. hungatei (Supplementary Data 1 ).
The hydrogen production by the syntroph and its expression of hydrogenases (upregulated in the early-aggregation state co-cultures) match the transcription patterns of the hydrogenases in both methanogens, most of which were statistically significantly upregulated in the early co-cultures, compared to the later ones (Fig. 4 , Figure S 13 , S 14 ). Overall, genes involved in hydrogenotrophic methanogenesis of both M. hungatei and M. formicicum , as well as archaeal acetate transporters, were upregulated in the late-aggregation state co-cultures. This might suggest an increased need for the carbon source (acetate) for methanogenic cell synthesis in the late aggregated co-cultures.
In both late-aggregation state co-cultures, methanogens also upregulated formate transporters and some formate dehydrogenases (Mhun_0075, Mhun_1811, Mform_01717-01720), while S. fumaroxidans upregulated tungsten/molybdenum-containing formate dehydrogenases FDH3, FDH4 (also FDH5 for co-cultures with M. hungatei ) and a formate transporter (Sfum_2707). Moreover, S. fumaroxidans had a significantly upregulated expression of periplasmic formate dehydrogenase (Sfum_0035-0037) in the late-aggregation state co-cultures with M. hungatei , compared to the co-cultures with M. formicicum (Supplementary Data 1 ). We hypothesize that the upregulation of formate transporters and formate dehydrogenases of M. formicicum and the syntroph in the late-aggregation state, compared to the early-aggregation state, could mean that formate transport by passive diffusion through the cell membrane was no longer taking place in the late-aggregation state co-cultures; instead, formate transporters were activated to pump formate into the cell (in the case of the methanogens) or out of it (in the case of the syntroph). Closer proximity of the methanogenic and syntrophic cells in the aggregates potentially allows a faster formate exchange, resulting in a lower concentration outside of the cell and making passive diffusion less favourable. However, we could not prove that this is exactly the case, since formate was below the detection limit throughout the fed-batch cultivation (even during early cycles).
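The up-/down-regulation calls made throughout this section rest on fold-change comparisons between aggregation states. The authors' actual differential-expression pipeline is not described in this excerpt, so the following is only a minimal sketch of the underlying arithmetic; the locus tags are taken from the text, but the normalized count values are hypothetical.

```python
import math

def log2_fold_changes(early, late, pseudocount=1.0):
    """log2(late/early) per gene from mean normalized counts; the
    pseudocount avoids division by zero for unexpressed genes."""
    return {g: math.log2((late[g] + pseudocount) / (early[g] + pseudocount))
            for g in early}

def flag_regulated(lfc, threshold=1.0):
    """Split genes into up-/down-regulated sets at |log2FC| >= threshold,
    i.e. at least a twofold change between aggregation states."""
    up = {g for g, v in lfc.items() if v >= threshold}
    down = {g for g, v in lfc.items() if v <= -threshold}
    return up, down

# Hypothetical mean normalized counts (early- vs late-aggregation state)
early = {"Sfum_0843": 400.0, "Sfum_2221": 50.0, "Sfum_2707": 120.0}
late = {"Sfum_0843": 95.0, "Sfum_2221": 420.0, "Sfum_2707": 130.0}

lfc = log2_fold_changes(early, late)
up, down = flag_regulated(lfc)  # up = {"Sfum_2221"}, down = {"Sfum_0843"}
```

In practice such calls would come from a statistical test (e.g. a negative-binomial model with multiple-testing correction) rather than a bare threshold; the sketch only illustrates the direction-of-change logic.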
Flagella, type IV pili and chemotaxis
The strikingly high expression of S. fumaroxidans flagella, type IV pili and chemotaxis genes in the Cycle 1 co-cultures with either of the methanogens (Figure S 15 ) suggests the importance of swimming/swarming functions for the initial contact with methanogens. In addition to this, flagella can also play a role in cell adhesion (Craig et al. 2019 ). In bacteria and archaea, type IV pili were found to be important for cell-to-cell, cell-to-surface and even cell-to-substrate adhesion (Rakotoarivonina et al. 2002 ; Pohlschroder and Esquivel 2015 ; van Wolferen et al. 2018 ).
In both early-aggregation state Sf-Mh and Sf-Mf co-cultures, S. fumaroxidans had a significantly higher expression of the complete set of type IV pili-encoding genes (Figure S 15 ): pilBMNOQVWXY1Z , sensor histidine kinase PilS, response regulator PilR and twitching-mobility proteins PilT, MglA (Sfum_2553-2557, 1694, 1695, 0538-0540, 0119, 2092). Lower expression of these genes in the late-aggregation state co-cultures might mean pili genes were no longer essential, as the “first contact” with the methanogenic partner and cell-to-cell adhesion had already been established. Something similar has been reported in studies on surface attachment in Caulobacter crescentus , where irreversible cell adhesion to the surface resulted in a significantly repressed activity of pili (Ellison et al. 2017 ). Interestingly, the major pilin protein PilE of S. fumaroxidans (Sfum_0126), which was upregulated twofold in the Cycle 1 Sf-Mf co-cultures, has 64% homology to the geopilin of Geobacter sulfurreducens PCA (previously recorded as pilA , NCBI accession AAR34870.1). Geopilin of Geobacter was shown to be crucial for the cells’ attachment to insoluble Fe (III) oxides and contributed to the extracellular electron transfer (Reguera et al. 2005 ). Higher differential expression of the gene encoding geopilin (GSU1496) led to a tenfold improved Fe (III) oxide reduction in laboratory-evolved Geobacter strains (Tremblay et al. 2011 ). Whether PilE of S. fumaroxidans plays a crucial role in the attachment of the syntroph to the methanogenic cells remains to be tested.
The genome of M. formicicum has an incomplete archaella operon, comprised of flaIJ (Mform_00687-00688) and flaK with a hypothetical flagellin-encoding gene with amino acid homology to a class III signal peptide (Mform_01543-01544). In other Euryarchaeota , FlaK regulates flagellar assembly, and FlaI is predicted to have a membrane-spanning C-terminal domain that may interact with the membrane-bound FlaJ (Ghosh and Albers 2011 ). While flaIJ and flaK of M. formicicum were actively transcribed in both early- and late-aggregation state Sf-Mf co-cultures, the hypothetical flagellin-encoding gene in the operon with flaK was significantly upregulated in the early-aggregation state Sf-Mf co-cultures (Figure S 16 ). In Sf-Mh co-cultures, M. hungatei also had an actively transcribed complete operon of archaella genes flaFGHIJ (Mhun_0101-0105) and three separately encoded flagellins FlaB (Mhun_1238, Mhun_3139-3140). In early-aggregation state Sf-Mh co-cultures, M. hungatei had upregulated expression of flaF (Figure S 17 ), which anchors the archaellum in the cell envelope to the archaeal S-layer (Banerjee et al. 2015). In the late-aggregation state co-cultures, on the contrary, two of the archaeal flagellins flaB (Mhun_3139-3140) were upregulated, with Mhun_3140 being among the 15 most expressed genes in Sf-Mh co-cultures (Figure S 17 , Supplementary Data 1 ). Recent studies have provided evidence that the archaellum of M. hungatei (comprised of the flaB gene product of Mhun_3140) has conductive properties (Poweleit et al. 2016 ; Walker et al. 2019 ). It is thus possible that aggregation of M. hungatei with the syntroph facilitates extracellular or direct electron exchange via archaeal conductive appendages.
Numerous chemotaxis genes of M. hungatei (62 in total) were also differentially expressed in either late- or early-aggregation state co-cultures (Figure S 18 ). However, since multiple genes encoded homologous proteins, it is hard to evaluate the specific function they have in the two co-culture aggregation stages. The only currently known chemotactic attraction of M. hungatei , towards acetate (Migas et al. 1989 ), would indeed be relevant to both aggregation stages, since acetate was present in high quantities throughout the fed-batch cultivation (Fig. 1 ). For example, after Cycle 4, both methanogenic co-cultures had 16–18 mM acetate at the start of each cycle and 30–35 mM of acetate at the end of the cycle. Thus, it may well be possible that chemotaxis genes were in general highly expressed in M. hungatei , regardless of the aggregation progress with S. fumaroxidans , albeit with the actual function performed by homologous proteins. The chemotaxis-associated operon containing cheBYW and methyl-accepting proteins in S. fumaroxidans (Sfum_1645-1652) was also among the most highly differentially expressed genes between the two aggregate maturation stages (Figure S 15 ).
Expression of adhesins and polysaccharide-associated genes
We identified eight adhesins of S. fumaroxidans which were upregulated in the late-aggregation state co-cultures (Figure S 15 ). Specifically, genes for fibronectin type III proteins (Sfum_0949, Sfum_2299, Sfum_1109, Sfum_0329) and putative outer membrane adhesin-like proteins (Sfum_2357, Sfum_2359, Sfum_2362) were expressed two- to fourfold higher in the later Sf-Mf and Sf-Mh co-cultures, compared to the early-aggregation state co-cultures. Adhesins and adhesive glycoproteins like fibronectin are large repetitive proteins important for cell-to-cell adhesion and expansion of the bacterial EPS matrix. Deletions of these genes in Salmonella enterica resulted in bacterial phenotypes that are unable to form biofilms (Latasa et al. 2005 ). It is curious that adhesins of S. fumaroxidans were upregulated in the late-aggregation state co-cultures and not in the early ones, where adhesion and establishment of the “first contact” between the cells might be of primary importance.
The syntroph also had two distinct polysaccharide biosynthesis/export operons that were differentially expressed in either early- (Sfum_2182-2190) or late-aggregation state (Sfum_0972-0979) Sf-Mf and Sf-Mh co-cultures (Figure S 15 ). Operon (Sfum_2182-2190) had several glycosyltransferases, porin and a GDP-L-fucose synthase, while operon Sfum_0972-0979 contained another set of polysaccharide biosynthesis/export proteins (with Cps/CapB family tyrosine kinase involved in the biosynthesis of capsular polysaccharide).
The transcriptome of M. hungatei revealed a full pathway for the biosynthesis of UDP-N-acetyl-D-glucosamine from α-D-glucose 6-phosphate (Mhun_2600, Mhun_2852-2855) (Figure S 17 ). All these genes (except glmM , phosphoglucosamine mutase, Mhun_2852) were upregulated in the Cycle 1 Sf-Mh co-cultures, compared to the Cycle 23 co-cultures. In the Cycle 23 Sf-Mh co-cultures, instead, M. hungatei significantly upregulated expression of the major sheath protein MspA (Mhun_2271), which was the third most highly expressed gene in this condition (Figure S 14 , Supplementary Data 1 ).
EPS-associated genes of M. formicicum , like genes for the biosynthesis of GDP-D-rhamnose, were mostly upregulated in the late-aggregation state Sf-Mf co-cultures (Figure S 16 ). Notably, in the Cycle 20 Sf-Mf aggregates, M. formicicum had a fourfold higher expression of a large OmcB-like cysteine-rich periplasmic protein with a conserved DUF11 domain (Mform_01534, Figure S 13 ). This protein is hypothesized to play a key role as a membrane-bound adhesion protein involved in maintaining cell aggregates (Sumikawa et al. 2019 ).
Cell signalling
Since microbial aggregation and biofilm formation are coordinated microbial processes, it was not surprising to see high expression of genes involved in intra- and intercellular signalling (Figure S 15 , S 16 , S 17 ). Specifically, Sf-Mh and Sf-Mf co-cultures had differentially expressed S. fumaroxidans genes involved in the cycling of the intracellular signalling molecule bis-(3′–5′)-cyclic dimeric guanosine monophosphate (c-di-GMP) (Figure S 15 ). While most of the genes potentially involved in the sensing and synthesis of c-di-GMP (Sfum_0719, Sfum_1516, Sfum_1918, Sfum_2650, Sfum_1020, Sfum_2209) were upregulated in the early-aggregation state co-cultures, a few diguanylate cyclases (Sfum_0975, Sfum_3257) and associated regulatory sensory histidine kinases in polysaccharide biosynthesis operons (Sfum_2621-2622) were upregulated in the late-aggregation state co-cultures (Figure S 15 ). A similar expression pattern for c-di-GMP-associated genes was previously reported for another sulfate reducer, Desulfovibrio vulgaris Hildenborough, where expression of diguanylate cyclases was found to be essential for D. vulgaris Hildenborough optimal growth and biofilm-forming capability (Rajeev et al. 2014 ). High concentrations of intracellular c-di-GMP can promote biofilm formation by inducing the synthesis of exopolysaccharides and adhesins, while inhibiting motility and activity of flagella (Hengge 2009 ; Rajeev et al. 2014 ). Unfortunately, we currently lack measurements of the intracellular concentrations of c-di-GMP in the Sf-Mf and Sf-Mh co-cultures to explicitly correlate the observed gene expression profiles with the co-culture aggregation status.
We were, however, able to detect varying extracellular concentrations of the signalling molecules involved in intercellular signalling, AHLs, during the year of Sf-Mf and Sf-Mh fed-batch cultivation (Table 1 ). The concentrations of C8- and C12-HSL in the co-culture supernatants fit well into the range reported for extracts from a pure culture of Methanosaeta harundinacea 6Ac (Zhang et al. 2012 ), where concentrations of a β-ketooctanoyl-L-homoserine lactone homologue ranged from 24 ng/mL to 6.5 μg/mL. In Sf-Mf and Sf-Mh co-cultures, the three AHLs (C7-, C8- and C12-HSL) were only detectable in the later cultivation stages (Cycle 6 in Table 1 ). It is possible, however, that the concentrations observed here are sufficiently high to activate bacterial adhesion or biofilm formation, as they were above the stimulatory 10 ng/mL threshold reported for the aggregation of microorganisms in activated sludge (Wang et al. 2021 ). These three AHLs were below the detection limit both in the early co-cultures (Cycle 1) and in the co-cultures that already had macroaggregates (after Cycle 13, Table 1 ). On the contrary, 3-oxo-C6-HSL was detectable throughout the development of the aggregates but was always below the limit of quantification (Table 1 , Table S 2 ).
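The paragraph above distinguishes analytes that are below the detection limit from those detectable but below the limit of quantification. A small reporting helper makes the three-way distinction concrete; the LOD/LOQ values used below are hypothetical illustrations, not the study's actual analytical limits.

```python
def report_concentration(value_ng_ml, lod_ng_ml, loq_ng_ml):
    """Report an analyte the way analytical tables typically do:
    'ND' (not detected) below the LOD, '<LOQ' (detected but not
    quantifiable) between LOD and LOQ, and a numeric value above LOQ."""
    if value_ng_ml < lod_ng_ml:
        return "ND"
    if value_ng_ml < loq_ng_ml:
        return "<LOQ"
    return f"{value_ng_ml:.1f} ng/mL"

# Hypothetical AHL readings against assumed limits (LOD 2, LOQ 10 ng/mL)
readings = {"C7-HSL": 0.5, "3-oxo-C6-HSL": 4.0, "C12-HSL": 26.3}
reported = {name: report_concentration(v, 2.0, 10.0)
            for name, v in readings.items()}
```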
In contrast to intracellular signalling, where genes related to the cycling of c-di-GMP could be identified, the search for genes with intercellular AHL sensing/synthesis domains was not trivial. We were not able to find any genes homologous to the known AHL synthases (Rosemeyer et al. 1998 ) in any of the three microorganisms studied here. A single report exists to date describing the ability of a methanogen, M. harundinacea 6Ac, to synthesize a QS molecule structurally similar to C12-HSL (N-carboxyl-C 12 -HSL) (Zhang et al. 2012 ). The genomes of all three microorganisms studied here possess multiple homologs of the newly identified AHL synthase in M. harundinacea 6Ac, currently annotated as PAS domain S-box proteins, with up to 30% protein identity. However, none of those PAS domain S-box proteins are homologous to the LuxI autoinducer synthase of Erwinia chrysanthemi (GenBank accession number AAM46699), which was used to probe for the AHL synthase in the M. harundinacea 6Ac study.
We did identify genes needed for sensing of extracellular AHL and autoinducer-2 (AI-2) in S. fumaroxidans and M. formicicum . In both Cycle 1 Sf-Mf and Sf-Mh co-cultures, S. fumaroxidans had an upregulated expression of an AI-2 receptor (3-hydroxy-5-phosphonooxypentane-2,4-dione thiolase, lsrF, Sfum_2464) that can be involved in the degradation of the phospho-AI-2 molecule. In contrast, genes for AHL hydrolysis ( ahlD , Sfum_3579) were slightly upregulated in the late-aggregation state co-cultures (1.5–2-fold, Figure S 15 ). An AHL hydrolase, homologous to the known bacterial ones, was also found in M. formicicum (Mform_01488) but was likewise only slightly upregulated in the late-aggregation state Sf-Mf co-culture (less than 1.5-fold, Figure S 16 ). Therefore, it is possible that the decreased levels of AHLs reported in Table 1 might be explained by the activity of these AHL hydrolases (Mform_01488 and Sfum_3579) in the late-aggregation state Sf-Mf co-cultures.
Apart from genes associated with the turn-over of the known signalling molecules like AHLs and c-di-GMP, all three microorganisms studied here possess numerous poorly characterized genes with predicted cell signalling-associated functions. Both methanogens had highly upregulated peptide transporter systems and “autotransporter porins” in the late-aggregation state co-cultures (“Secretion” set for M. formicicum on Figure S 16 , “Signal transduction” set for M. hungatei on Figure S 18 ). These proteins might be involved in the extracellular exchange of information, potentially cross-kingdom (Li et al. 2018 ). Porins of M. formicicum had a conserved DUF11 domain (Mform_00118, Mform_01307, Mform_01534, Mform_02114), hypothesized to be a part of the archaeal S-layer structure. DUF11-containing proteins were previously reported to play a key role in stabilizing the cell aggregates in Methanothermobacter sp. CaT2 (Sumikawa et al. 2019 ). Considering that pure cultures of M. formicicum tend to self-aggregate, we can hypothesize that DUF11-containing proteins in this microorganism might also be involved in the stabilization of the aggregating cells. This hypothesis is plausible since the most differentially and highly expressed protein of M. formicicum in the late-aggregation state Sf-Mf co-cultures, Mform_01534, has a 41% protein identity to the aggregation-defining protein of Methanothermobacter sp. CaT2 (GenBank WP_158498096.1).
The transcriptome of S. fumaroxidans had a few poorly characterized signalling genes, like the outer membrane adhesin-like proteins Sfum_2357 and Sfum_2359, which were upregulated in the later aggregated co-cultures and might be involved in the recognition of the syntroph by a partner methanogen (Figure S 15 ). However, the amino acid sequences of these genes had only low homology (15%) to the PilD of another syntroph, Pelotomaculum thermopropionicum , which was reported to directly activate expression of methanogenesis genes and hydrogenases in the methanogenic partner Methanothermobacter thermautotrophicus (Shimoyama et al. 2009 ). Apart from these genes, S. fumaroxidans had a few other genes potentially needed for partner recognition, predicted to encode cell surface glycoprotein-containing proteins (Sfum_0949, 0329, 0804). All three were highly upregulated and differentially expressed in both late-aggregation state Sf-Mf and Sf-Mh co-cultures. However, more experimental evidence is needed to pinpoint the exact role of these signal transduction/receiving proteins.
With this morphological and biochemical study of the syntrophic aggregates, we set the stage for the exploration of this heavily understudied field of suspended methanogenic biofilms. Knowing how methanogenic communities switch and adapt to the attached/aggregated lifestyle greatly improves our fundamental understanding of these ubiquitous microbial communities that live at the lowest thermodynamic energy limits.

Abstract
For several decades, the formation of microbial self-aggregates, known as granules, has been extensively documented in the context of anaerobic digestion. However, current understanding of the underlying microbial-associated mechanisms responsible for this phenomenon remains limited. This study examined morphological and biochemical changes associated with cell aggregation in model co-cultures of the syntrophic propionate-oxidizing bacterium Syntrophobacter fumaroxidans and hydrogenotrophic methanogens, Methanospirillum hungatei or Methanobacterium formicicum . Previously, we observed that when syntrophs grow for long periods with methanogens, cultures tend to form aggregates visible to the eye. In this study, we maintained syntrophic co-cultures of S. fumaroxidans with either M. hungatei or M. formicicum for a year in a fed-batch growth mode to stimulate aggregation. Millimeter-scale aggregates were observed in both co-cultures within the first 5 months of cultivation. In addition, we detected quorum sensing molecules, specifically N-acyl homoserine lactones, in co-culture supernatants preceding the formation of macro-aggregates (with a diameter of more than 20 μm). Comparative transcriptomics revealed higher expression of genes related to signal transduction, polysaccharide secretion and metal transporters in the late-aggregation state co-cultures, compared to the initial ones. This is the first study to report in detail both biochemical and physiological changes associated with aggregate formation in syntrophic methanogenic co-cultures.
Keypoints
• Syntrophic co-cultures formed mm-scale aggregates within 5 months of fed-batch cultivation.
• N-acyl homoserine lactones were detected during the formation of aggregates.
• Aggregated co-cultures exhibited upregulated expression of adhesin- and polysaccharide-associated genes.
Graphical abstract
Supplementary Information
The online version contains supplementary material available at 10.1007/s00253-023-12955-w.
Keywords

Acknowledgements
The authors would like to thank Dr Caroline Plugge for the insightful discussions on the formation of aggregates in the syntrophic co-cultures.
Author contribution
AD and DS conceived and designed the research. AD conducted the experiments. MB and MS contributed to establishing the analytical tools. AD, MB and MS analysed the data. AD wrote the manuscript (lead) with contributions/revisions from all authors. All authors read and approved the manuscript.
Funding
This study was funded by the Netherlands Ministry of Education, Culture and Science (SIAM Gravitation Grant 0.24.002.002), the Dutch Research Council (NWO) grant number OCENW.XS21.4.067 and Wageningen Graduate Schools Postdoc Talent Grant attributed to A.D.
Data availability
All data supporting the findings of this study are available within the paper and its two Supplementary Information files. Processed and normalized RNA-seq data is available in the Supplementary file 2 (spreadsheet).
Declarations
Ethical approval
This article does not contain any studies with human or animal participants performed by any of the authors.
Conflict of interest
The authors declare no competing interests.

Appl Microbiol Biotechnol. 2024 Jan 13; 108(1):1-15. Published open access under a CC BY license.
PMC10787696 (PMID: 38217830)

Introduction
Prostate cancer is the most commonly diagnosed tumor in men worldwide, with the highest incidence in Northern Europe [ 1 ]. However, mortality rates do not align with incidence rates, thanks to early diagnosis and treatment in most cases. For localized disease, radical prostatectomy is the recommended surgical treatment, regardless of the risk of tumor progression [ 2 ]. The robot-assisted laparoscopic approach is currently considered a reliable option for both oncological and functional outcomes [ 2 ].
According to the literature, robot-assisted laparoscopic radical prostatectomy (RARP) demonstrated better urinary continence [ 3 ] and potency rates [ 4 ] compared to open and laparoscopic approaches due to the high-definition of surgical plans and the ease of instrument manipulation provided by the robotic system. Although the standard (anterior) approach ensures complete recovery of continence in 96.5% of patients, a quarter of cases still complain of erectile dysfunction five years after surgery [ 5 ].
The Retzius-sparing (posterior) approach was proposed to preserve anterior structures, such as the Santorini plexus, endopelvic fascia, and puboprostatic ligaments. After evaluating the first 50 consecutive posterior RARP cases, a progressive improvement in outcomes was observed, resulting in a low rate of positive surgical margins (PSM) and good continence recovery [ 6 ]. However, recovery of satisfactory erectile function was reported in no more than 80% of patients one year after surgery.
Microscopic evaluation of non-nerve-sparing radical prostatectomy specimens has shown that 20–25% of nerves are primarily located along the ventral circumference of the prostatic capsule [ 7 ]. In addition, Tewari et al. described a tri-zonal neural architecture lateral to the bladder neck and seminal vesicles, which includes the proximal neurovascular plate, the neurovascular bundle (NVB), and the accessory neural pathways [ 8 ]. Therefore, a lateral approach might preserve tissue integrity to improve postoperative recovery of erectile function.
This study aims to assess the oncological and functional outcomes using a lateral approach in robotic-assisted radical prostatectomy (LRRP). | Methods
Data collection
A retrospective review of medical records of all patients who underwent LRRP between October 2019 and July 2021 was conducted. A single experienced robotic surgeon performed all procedures. Patients with a pathological diagnosis of prostate cancer with localized disease were included in this analysis [ 9 ].
The following demographic data and tumor characteristics were gathered: age, body mass index (BMI), Charlson comorbidity index (CCI), preoperative total serum prostate-specific antigen (PSA) level, prostate volume, biopsy Gleason score, and D’Amico risk group [ 10 ]. Intra- and perioperative data, such as operative time (OT), console time (CT), intraoperative blood loss (IBL), length of stay, postoperative complications within 30 days, specimen Gleason score, PSM, and pathological stage were also collected. Early complications (up to 30 days) were graded according to the Clavien-Dindo classification (CD) [ 11 ]. Follow-up visits with PSA measurement were scheduled at 1, 3, 6, and 12 months following surgery. Recovery of full urinary continence was considered achieved when the 24-hour pad weight test was zero [ 12 ]. The recovery of erectile function was defined as complete in the presence of erections adequate for sexual intercourse with or without the use of a phosphodiesterase type 5 enzyme inhibitor.
Formal ethics committee approval was deemed unnecessary for this type of study in our center because data were collected retrospectively for clinical purposes, and all the procedures were performed as part of routine care. The study was conducted following the 1964 Helsinki Declaration and its later amendments. All patients signed an informed consent form allowing the collection of their anonymized data.
Surgical technique
LRRP is performed using a four-arm da Vinci robot Xi (Intuitive Surgical, Sunnyvale, CA, USA) with the patient in a 30° Trendelenburg position.
The procedure starts with a sub-umbilical incision and creation of pneumoperitoneum using a Veress needle. Trocars are then positioned in a standard fashion: two robotic trocars on the left umbilical side, a robotic trocar in the right iliac fossa, and two 5-mm assistant trocars on the right umbilical side (Fig. 1 ). However, the positions of the Prograsp and bipolar forceps are reversed to avoid mechanical conflicts during the procedure.
At the beginning, the Retzius space must first be accessed, starting from the right side. A small incision is made in the peritoneum on the right side, starting from the right umbilical artery and continuing until the ipsilateral vas deferens (VD) is reached. Dissection proceeds to the endopelvic fascia, which is incised at the 2 o’clock position to avoid injuring the pericapsular nerve. The right periprostatic fat is then released from the anterior surface of the prostate, and a limited dissection on the left side allows the bladder to descend. Once the right lateral surface of the prostate becomes visible, the Prograsp forceps is used to gently pull the prostate towards the left side (Fig. 2 a).
Afterward, dissection continues along the lateral bladder neck until the right seminal vesicle (SV) is reached. Lateral prostatic pedicles are clipped using 5-mm titanium clips. When feasible, the NVB is separated from the right lateral surface of the prostate, developing an intrafascial plane (Fig. 2 b). The right SV is then isolated laterally, allowing access to the plane between the posterior surface of the prostate and the Denonvilliers fascia. The right VD is also cut.
The bladder fibers attached to the edge of the prostate are then peeled off and pushed laterally. The bladder neck is fully preserved before being incised, and the vesical catheter is removed (Fig. 2 c). The posterior dissection of the prostate continues as far as possible, cutting the left SV and VD. No diathermy coagulation is applied close to the NVB, and clips are applied to the left seminal pedicles (Fig. 2 d). The apex of the prostate is reached posteriorly.
The dissection of the anterior surface of the prostate continues until the left NVB is released from the left side of the prostate up to the apex. Complete liberation of the prostate is necessary before the urethra can be sectioned (Fig. 2 e).
Maximal preservation of the urethra is mandatory to ensure postoperative urinary continence. Once the prostate dissection is finished, a 3-0 V-Loc suture is introduced to carry the tension of the anastomosis. The bladder opening is located on the left side, and a running suture is performed (Fig. 2 f).
Pressure on the perineum is applied to expose the urethral side: the first stitch is placed at 3 o’clock, the second stitch is placed under the first one (at 5 o’clock), and the vesicourethral anastomosis is completed at the 3 o’clock position. Finally, the anastomosis is tested by filling the bladder, and an ENDOPOUCH RETRIEVER ® bag (Ethicon Inc., Somerville, NJ, USA) is used to remove the specimen. No drain is positioned at the end of the surgery. The procedure can be viewed in the Supplementary video 1 .
Statistical analysis
The SPSS software package version 26.0 (IBM Corp., Armonk, NY) was used for all statistical tests. Quantitative variables were reported as median and interquartile ranges, while categorical ones were expressed as absolute frequencies and percentages. T-test and Pearson Chi-square test were performed to compare continuous and categorical variables, respectively. A p value <0.05 was considered statistically significant. | Results
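The categorical comparisons described above use the Pearson chi-square test. As a sketch of what a statistics package computes under the hood, the statistic for a 2×2 table can be derived in a few lines; the counts below are illustrative only, not the study's data, and the result is compared against the df = 1 critical value of 3.841 (p < 0.05).

```python
def pearson_chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (no continuity correction): sum of (observed - expected)^2 / expected."""
    (a, b), (c, d) = table
    n = a + b + c + d
    cells = ((a, a + b, a + c), (b, a + b, b + d),
             (c, c + d, a + c), (d, c + d, b + d))
    chi2 = 0.0
    for observed, row_total, col_total in cells:
        expected = row_total * col_total / n
        chi2 += (observed - expected) ** 2 / expected
    return chi2

# Illustrative counts: PSM vs no PSM by pathological stage
table = [[6, 14],   # e.g. pT3: 6 with PSM, 14 without
         [5, 45]]   # e.g. pT2: 5 with PSM, 45 without
chi2 = pearson_chi_square_2x2(table)
significant = chi2 > 3.841  # chi-square critical value, df = 1, alpha = 0.05
```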
Overall, the study included 70 patients. Table 1 summarizes the baseline data and demographic characteristics. The median age was 64 (61–68) years, and the median CCI was 6 (5–6.5).
Intraoperative, perioperative and pathology outcomes are reported in Table 2 . The median OT, CT, IBL, and length of stay were 102 (92–108) min, 89 (78–96) min, 150 (130–180) mL, and 2 (1–2) days, respectively. Five cases of early postoperative complications were reported, and all were CD 1. PSM occurred in 11 cases. Two patients underwent adjuvant radiotherapy due to persistent PSA dosage, and one case of biochemical recurrence occurred 12 months after surgery. No patients died of cancer during the follow-up period. Functional outcomes are reported in Table 3 . 81% of patients had full continence within six weeks from surgery, with increasing rates at 3, 6, and 12 months after surgery (89%, 91%, and 94%, respectively). One patient required placement of a urethral sling due to persistent stress incontinence. Erections satisfactory for intercourse were reported in 53% of cases at 6 weeks after surgery. 31 patients required a PDE5 inhibitor. Overall, erectile function rates exhibited a progressive increase, reaching 69% (48/70), 78% (55/70), and 84% (59/70) at 3, 6, and 12 months, respectively.
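Single-arm proportions from 70 patients carry sampling uncertainty that raw percentages hide. As an illustration that is not part of the authors' analysis, a 95% Wilson score interval for the reported 12-month erectile-function rate (59/70) can be computed as follows:

```python
import math

def wilson_ci_95(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion successes/n."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

low, high = wilson_ci_95(59, 70)  # 12-month erectile-function recovery
# Point estimate 59/70 ~ 84%; the interval spans roughly 74% to 91%.
```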
Robotic surgery has gained acceptance and spread globally due to its ability to provide enhanced visualization and great precision in hard-to-reach areas. The magnification of the surgical field has allowed for the development of various techniques to preserve the periprostatic structures. Among them, our lateral approach appears helpful as it maximizes the preservation of ultrastructures that support the external urethral sphincter and nerves along the prostatic capsule. Moreover, the application of a single V-Loc 3-0 suture demonstrated safety and effectiveness in vesicourethral anastomosis. Notably, the barbed suture proved to be non-inferior when compared to a continuous running suture comprising two 3-0 Monocryl sutures tied together: this comparison revealed a minimal leakage rate of merely 1.4% within a comprehensive series of 2500 cases, as reported in previous research [ 13 ]. This observation aligns with the findings of another study conducted by Zorn et al., further corroborating the comparable security and efficacy of the aforementioned suturing techniques [ 14 ].
Despite the aim of preserving periprostatic structures to improve functional outcomes, achieving oncological radicality remains paramount. The confined spaces may increase the risk of incomplete dissection during some steps of LRRP. Our study showed a PSM rate of 15%, which is in line with the mean overall rate of 15.2% reported in a review including 16 studies [ 15 ]. Six of the eight PSM were found in patients with pT3 tumors, which are well known to be associated with a high rate of PSM [ 16 ]. Therefore, we argue that LRRP can be considered a safe technique with satisfactory oncological outcomes.
Since the initial RARP description [ 17 ], caution has been recommended when sparing the NVB. Postoperative erectile dysfunction ranges from 14 to 90% [ 18 ], with age, preoperative erections, and CCI as the main factors affecting erectile function recovery following surgery [ 19 ]. Our lateral approach involves a high endopelvic fascia incision to maximize nerve preservation. It is also utilized in select high-risk tumor cases due to the lack of correlation between PSM rate and the nerve-sparing technique [ 20 ]. The findings of this study reveal a progressive improvement in erectile function recovery over the course of follow-up. In the initial postoperative months, approximately half of the patients received adjunctive medical therapy consisting of PDE5 inhibitors, which facilitate the intracellular accumulation of cGMP within the smooth muscle cells lining blood vessels. Furthermore, the drug’s multifaceted neuroregenerative properties have been validated in animal models, providing substantial evidence to endorse the idea that this pharmaceutical agent not only triggers neurogenesis but also fosters angiogenesis and synaptogenesis within peripheral nerves [ 21 ].
The recovery of urinary continence is another key factor to consider when assessing RARP outcomes. In addition to the preservation of the periprostatic tissue, several preoperative risk factors, such as age, preexisting lower urinary tract symptoms, BMI, and membranous urethral length, may also play an important role in functional outcomes [ 22 ]. The most well-known preservation approach is the Retzius-sparing RARP, which aims to preserve the anterior support of the prostate. A recent meta-analysis found that the early recovery rate of urinary continence was higher with this technique than with the standard approach (RR = 1.74 and RR = 1.33 after one week and three months from surgery, respectively), although no difference was observed at 12 months (RR = 1.01) [ 23 ]. Recently, Ficarra et al. introduced an innovative urethral fixation technique: a single suture secures the urethral wall to the medial dorsal raphe, positioned within the medial portion of the levator ani muscle, before the anterior wall of the urethra is incised, with the aim of maintaining the urethral stump in its anatomically correct position [ 24 ]. This technique resulted in early recovery of urinary continence in approximately two-thirds of cases (68.6%).
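The RR figures quoted from the meta-analysis are simple ratios of continence rates between the two arms. A minimal sketch, with hypothetical counts chosen to land near the 3-month figure:

```python
def relative_risk(events_a, n_a, events_b, n_b):
    """Risk ratio of group A (e.g. Retzius-sparing) vs group B (standard)."""
    return (events_a / n_a) / (events_b / n_b)

# Hypothetical counts reproducing roughly the 3-month figure (RR ~ 1.33):
rr = relative_risk(80, 100, 60, 100)  # 80% vs 60% continent
print(f"RR = {rr:.2f}")  # → RR = 1.33
```

An RR of 1.01 at 12 months, as in the meta-analysis, corresponds to nearly identical continence rates in both arms.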
Our technique involves anterior dissection of the prostate sparing the pubovesical ligaments and preserving the structures supporting the external urethral sphincter muscle and the original position of the urethra [ 25 ]. Additionally, our accurate intrafascial dissection of the prostate probably contributes to the recovery of urinary continence. A study by Kim et al. found that bilateral nerve-sparing RARP was independently associated with 1-year postoperative continence return (OR = 3.671) [ 26 ]. Most of our patients regained continence after six weeks (81%), and 94% were continent at 12 months. Therefore, our technique seems promising for achieving a full recovery of continence.
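An odds ratio such as Kim et al.'s 3.671 does not translate directly into a continence rate. For illustration only, the absolute rate it implies at an assumed (hypothetical) 70% baseline can be computed as:

```python
def risk_from_or(odds_ratio, baseline_risk):
    """Absolute risk implied by an odds ratio at a given baseline risk."""
    odds = odds_ratio * baseline_risk / (1 - baseline_risk)
    return odds / (1 + odds)

# Assumed baseline: 70% continence at 1 year without bilateral nerve sparing.
p = risk_from_or(3.671, 0.70)
print(f"implied continence rate: {p:.1%}")  # → 89.5%
```

The 70% baseline is an assumption for the sketch, not a figure from Kim et al.; the point is only that odds ratios overstate risk ratios when the outcome is common.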
Limitations of the study
This study has some limitations. First, a significant constraint lies in its retrospective design. The absence of a comparative group hinders our ability to discern the impact of the intervention in question relative to standard RARP approaches.
Second, all procedures were performed by a single experienced surgeon; less skilled surgeons may therefore not achieve the same results, particularly before completing their learning curve. Furthermore, the study's results must be interpreted within the context of a limited sample size, which also makes it difficult to ascertain after how many cases good oncological and functional outcomes can be achieved.
Consequently, a multicenter study should be performed to validate the findings presented here. | Conclusion
Our study shows that our technique was feasible in experienced hands and associated with a low rate of early complications and PSMs. Our lateral approach demonstrated rates of continence recovery similar to those of other techniques while showing promising results in erection recovery. These results suggest that LRRP can lead to satisfactory oncological and functional outcomes, provided that a surgeon skilled in the standard technique adopts this approach to improve tissue integrity. | In the era of robotic prostate surgery, various techniques have been developed to improve functional outcomes. Urinary continence has shown satisfactory results, but the preservation of lateral nerves to the periprostatic capsule is only achievable by sparing the pubovesical complex. This study aims to present the first cases of lateral-approach robot-assisted radical prostatectomy (LRRP) performed by a novice surgeon. We conducted a retrospective analysis of 70 prostate cancer patients who underwent LRRP between October 2019 and September 2021, analyzing the perioperative and functional outcomes. The median operative time and intraoperative blood loss were 102 (92–108) minutes and 150 (130–180) mL, respectively. Five minor postoperative complications were reported, and the median hospital stay was 2 (1–2) days. Eleven positive surgical margins occurred. Potency and urinary continence recovery were achieved in 59 (84%) and 66 (94%) patients, respectively, 12 months after surgery. Our analysis shows that LRRP is a safe and effective procedure for prostate cancer surgery. Continence and potency recovery required a short learning curve, with an acceptable recovery rate even in the initial cases.
Supplementary Information
The online version contains supplementary material available at 10.1007/s11701-023-01772-y.
Keywords
CG: Methodology, Writing – original draft, Writing – review and editing. DC: Conceptualization, Methodology, Writing – review and editing. NSV: Writing – review and editing. JR: Supervision. JPK: Writing – original draft. LHL: Writing – original draft. TP: Writing – original draft. JBR: Writing – original draft. JR: Writing – original draft. JLH: Writing – review and editing. ABG: Conceptualization, Project administration, Writing – review and editing. RG: Conceptualization. GP: Writing – review and editing.
Funding
Open access funding provided by Università Politecnica delle Marche within the CRUI-CARE Agreement.
Data availability
The datasets used and analyzed during this study are available from the corresponding author upon reasonable request.
Declarations
Conflict of interest
The authors declare no competing interests.
Ethics statement
Formal ethics committee approval was deemed unnecessary for this type of study in our institute because retrospective data collection was obtained for clinical purposes, and all the procedures were performed as part of routine care. The study was conducted following the 1964 Helsinki declaration and its later amendments. | CC BY | no | 2024-01-15 23:41:53 | J Robot Surg. 2024 Jan 13; 18(1):24 | oa_package/1c/ea/PMC10787696.tar.gz |
|
PMC10787697 | 38064042 | Introduction
d -Glucaric acid ( d -saccharic acid) is a di-carboxylic acid that can be used for example to produce furan dicarboxylic acid (van Strien et al. 2020 ) or various polyamides, and polyesters (Sakuta and Nakamura 2019 ). Biotechnical conversion of d -glucose to d -glucaric acid (or to the conjugate salt d -glucarate) can provide a selective and less energy intensive alternative to chemical production processes (Zhang et al. 2021 ).
Moon et al. ( 2009 ) were the first to engineer Escherichia coli for production of d -glucaric acid. They introduced activities for myo-inositol-1-phosphate synthase, myo-inositol oxygenase and uronate dehydrogenase into E. coli for conversion of D-glucose via glucose-6-phosphate and 1L-myo-inositol-1-phosphate (1 d -myo-inositol 3-phosphate) to myo-inositol, d -glucuronate and finally to d -glucaric acid, resulting in production of 1.1 g L −1 of d -glucaric acid (Table 1 ). By introducing a polypeptide scaffold to co-localize the pathway enzymes d -glucaric acid concentration was increased to ~ 2.5 g L −1 (Moon et al. 2010 ). The myo-inositol oxygenase (MIOX) with its di-iron center and low activity was suggested to be the rate-limiting step (Moon et al. 2010 ). The MIOX activity was subsequently improved by using an N -terminal fusion of small ubiquitin-related modifier (SUMO) to MIOX, showing 75% increase in myo-inositol to d -glucaric acid conversion (Shiue and Prather 2014 ). Overexpression of myo-inositol-1-phosphate phosphatase from E. coli was tested for further enhancement of the process and the flux of d -glucose from catabolism towards myo-inositol-1-phosphate was redirected by deletion of the phosphoglucose isomerase (Pgi) and glucose 6-phosphate dehydrogenase (Zwf) encoding genes in E. coli (Shiue et al. 2015 ). This resulted in an increased yield of d -glucaric acid from d -glucose (yield 0.73 g g −1 with titer of 1.19 g L −1 , d -xylose as supplementing carbon source). Another approach used to decrease the flux to glycolysis was altering Pfk activity (Brockman and Prather 2015 ; Gupta et al. 2017 ; Hou et al. 2020 ). The d -glucaric acid pathway has also been used as a demonstration pathway for different synthetic biology approaches including use of MAGE (Raman et al. 2014 ), and small molecule reporter (Rogers and Church 2016 ). Dynamic pathway regulation with a quorum sensing based system or myo-inositol biosensor (Doong et al. 2018 ; Verma et al. 
2022 ), regulation of Pgi translation by a d -fructose dependent control system (Qu et al. 2018 ), and NAD + regeneration system (Su et al. 2020 ) have been applied for d -glucaric acid production in E. coli . d -Glucaric acid production has also been demonstrated by in vitro conversion (Lee et al. 2016 ; Petroll et al. 2020 ; Su et al. 2019 ). Without myo-inositol addition, volumetric titers have remained between 1 and 2.5 g L −1 , although a recent study reported 5.35 g L −1 for intra plus extracellular titer in E. coli (Su et al. 2020 ) (reviewed by (Chen et al. 2020 )).
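Since each mole of d -glucose yields at most one mole of d -glucaric acid through this pathway, reported mass yields can be placed against the stoichiometric maximum. A short sketch using standard molecular weights (the 0.216 g g−1 figure is the Guo et al. yield quoted above):

```python
MW = {"d-glucose": 180.16, "myo-inositol": 180.16, "d-glucaric acid": 210.14}

def mass_yield_max():
    """Theoretical mass yield for the 1:1 molar glucose -> glucarate pathway."""
    return MW["d-glucaric acid"] / MW["d-glucose"]

def pct_of_theoretical(mass_yield_g_per_g):
    """Fraction of the stoichiometric maximum a reported yield represents."""
    return mass_yield_g_per_g / mass_yield_max()

print(f"max yield: {mass_yield_max():.2f} g/g")
print(f"0.216 g/g is {pct_of_theoretical(0.216):.0%} of theoretical")
```

Because d -glucaric acid is heavier than d -glucose, the theoretical maximum exceeds 1 g g−1, so even the best reported yields leave considerable headroom.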
Yeasts are considered advantageous for organic acid production because of their low pH tolerance and robustness (Abbott et al. 2009 ). Yeasts have also been engineered for production of d -glucaric acid, although more recently than E. coli (Table 1 ). In 2016 Gupta et al. first engineered Saccharomyces cerevisiae for production of d -glucaric acid by expressing the d -glucaric acid pathway genes coding for inositol monophosphatase, myo-inositol-1-phosphate synthase, myo-inositol oxygenase either from Mus musculus or from Arabidopsis thaliana and uronate dehydrogenase in an opi1 deletion background (Gupta et al. 2016 ). The engineered yeast produced a maximum titer of 0.56 g L −1 in batch culture, and 0.98 g L −1 in fed-batch. Liu et al. ( 2016 ) introduced the pathway into Pichia pastoris and produced 0.107 g L −1 d -glucaric acid from d -glucose. Since these pioneering studies, further pathway engineering approaches, e.g. enzyme and expression optimization, use of scaffolds, or improved viability, have increased d -glucaric acid titers up to 9.5 g L −1 (Fang et al. 2022 , 2023 ; Li et al. 2021 ; Zhang et al. 2020 ). In general, co-feeding of myo-inositol has resulted in higher final d -glucaric acid titers, up to 12.96 g L −1 in S. cerevisiae (Chen et al. 2018 ; Fang et al. 2022 ; Guo et al. 2022 ; Gupta et al. 2016 ; Li et al. 2021 ; Marques et al. 2020 ; Zhao et al. 2021 ), and 6.61 g L −1 in P. pastoris (Liu et al. 2016 ). Reported yields on d -glucose using yeast are scarce, and typically below 0.1 g g −1 , although recently a yield of 0.216 g g −1 (Guo et al. 2022 ) was reported (Table 1 ).
In E. coli deletion of the phosphoglucose isomerase (Pgi) and glucose 6-phosphate dehydrogenase (Zwf) encoding genes resulted in 2.9-fold (Qu et al. 2018 ), or nearly 18-fold higher yield on d -glucose (Shiue et al. 2015 ), compared to when the genes were present. The phosphoglucose isomerase encoding gene has not been deleted from S. cerevisiae or P. pastoris strains engineered for d -glucaric acid production. The phosphoglucose isomerase (Pgi1p) -deficient S. cerevisiae strains metabolize d -glucose only poorly, whereas the Pgi-deficient strains of E. coli are able to grow on d -glucose (Vinopal et al. 1975 ). Even relatively low (< 2 g L −1 ) d -glucose concentrations inhibit or reduce the growth of S. cerevisiae Pgi1p-deficient strains, possibly because of ATP depletion or glucose-6-phosphate accumulation (Maitra 1971 ). The accumulation of glucose-6-phosphate, the first metabolite in the d -glucaric acid pathway, could be advantageous for directing d -glucose flux from glycolysis to d -glucaric acid and thus improve the yield of d -glucaric acid on d -glucose. We introduced the d -glucaric acid pathway into a Pgi1p-deficient S. cerevisiae strain (Fig. 1 ) and studied d -glucaric acid production from monomeric and polymeric d -glucose substrates in shaken flasks and controlled bioreactor conditions with varying nitrogen concentrations. The formation of myo-inositol and d -glucaric acid, and the absence of d -glucuronate was confirmed by using 13 C-labelled d -glucose and GC–MS analysis. | Materials and methods
Strains and strain construction
E. coli strains DH5alpha or TOP10 were used for cloning steps, plasmid propagation and storage, and the Saccharomyces cerevisiae strain FY834 for recombination cloning. S. cerevisiae CEN.PK2-1D (VW-1B; MATα, leu2-3/112 ura3-52 trp1-289 his3∆1 MAL2-8 c SUC2 ) renamed H1346, and the pgi1 -deficient strain H2493, described previously (Verho et al. 2002 ) were the parental strains in which the d -glucaric acid biosynthetic pathway was expressed.
Strain H2493 was cured from tryptophan and/or leucine auxotrophies by transforming it with PCR fragments amplified with primers BCoreMT_LEU2_F and BCoreMT_LEU2_R or BCoreMT_TRP1_F and BCoreMT_TRP1_R (Table 2 ), using genomic DNA from S. cerevisiae strain S288C as a template, and selecting colonies on plates lacking leucine and/or tryptophan, as appropriate.
The myo-inositol oxygenase encoding gene Miox from mouse was obtained as a synthetic gene, codon optimized for S. cerevisiae (Gene Art, Germany). S. cerevisiae genomic DNA (from S288C or CEN.PK2-1D) was used to amplify the INO1 (YJL153C) and INM1 (YHR046C) genes (the resulting amino acid sequences are the same in both genomes). The uronate dehydrogenase (Udh) gene of Agrobacterium tumefaciens ( uro1 , GI:223,717,948) (Yoon et al. 2009 ; Boer et al. 2010 ) was obtained from the plasmid described by Boer et al. ( 2010 ). Miox , INO1 and uro1 were cloned into the pRS426-based (B712, (Christianson et al. 1992 )) plasmid B5054 (Salusjärvi et al. 2017 ), which has three promoter-terminator pairs pTEF1 and tADH1 , pTPI1 and tCYC1 , and pPGK1 and tPGK1 , by homologous recombination using primers BcoreMT_MIOX_F and BcoreMT_MIOX_R for Miox , BcoreMT_INO1_F and BcoreMT_INO1_R for INO1 , and BcoreMT_URO1_F and BcoreMT_URO1_R for uro1 (Table 2 ), resulting in plasmid B5310. The INM1 gene was expressed either as a single-copy genomic integrant or from a multicopy plasmid. INM1 was amplified with primers II_IMP_chr8_BamHI/EcoRI/F and II_IMP_chr8_Bam/R, digested with BamHI and cloned between the PGK1 promoter and terminator in a YEplac195-based multicopy vector B1181 (Toivari et al. 2010a ) digested with BglII to create plasmid INM1-B1181. The pPGK1 - INM1 - tPGK1 cassette was subsequently moved to YEplac181 (B548, (Gietz and Sugino 1988 )) by releasing the cassette with Hind III and ligating it to the Hind III site of YEplac181, resulting in plasmid B5154. For genomic integration the INM1 was targeted to the GRE3 locus. The 315 bp and 290 bp regions of the GRE3 gene were amplified from the H1346 genomic DNA with primer pairs S.c GRE3 nt-5frw and S.c GRE3 nt310rev and S.c GRE3 nt677frw and S.c GRE3 nt966rev, respectively, where the numbers are relative to nucleotide A in the ATG of the GRE3 gene. The BamH I site for cloning the gene expression cassette was included in the 315 bp region.
The 315 bp region was ligated into a Pvu II and BsiW I linearised plasmid pUG6 (Gueldener et al. 2002 ), and the plasmid obtained was then cut with EcoR V and Spe I for introducing the 290 bp region. The resulting plasmid, containing the KanMX cassette flanked by the S. cerevisiae GRE3 regions, was named pMLV84. The pMLV84 was digested with BamH I and the pPGK1 - INM1 - tPGK1 cassette was digested with Hind III from INM1-B1181; both fragments were blunt-ended and ligated, resulting in INM1-pMLV84. The integration cassette, released with Not I, was transformed into the pgi1 -deficient strain cured for TRP and LEU. A cassette without INM1 was transformed to create a control strain.
The INO1 - Miox - uro1 B5310 plasmid was introduced to the parental strain H1346 (with intact PGI1 ) with or without the INM1 plasmid B5154, resulting in strains H4254 and H4355, respectively. The INM1 was integrated to the GRE3 locus of H1346 as described for the pgi1 strains, and the plasmid B5310 was introduced, resulting in strain H5156. The INO1 - Miox - uro1 plasmid B5310 was introduced to the pgi1 -deficient strain cured with TRP1 and LEU2 , with or without INM1 integration, resulting in strains H4350 and H4344, respectively. The B5310 plasmid was also introduced to the pgi1 -deficient strain cured for TRP, together with the INM1 plasmid B5154, resulting in strain H4346 or to pgi1 -deficient strain with KMX integrated into GRE3 , resulting in strain H4351. Corresponding control strains with empty vectors B712 + B548 H4345, and H4349 were also created. The final strains used in the study are listed in Table 3 .
Media and culture conditions
Yeast strains were cultured in 20 or 50 mL volume on modified synthetic complete medium (YSC, (Sherman et al. 1983 )) without uracil and/or leucine, with 2% (w/v) d -glucose or the d -glucose and d -fructose concentrations indicated in the results, in 100 or 250 mL Erlenmeyer flasks, respectively, at 250 rpm, 30 °C. The pgi1 -deficient strains were grown with 0.5 g d -glucose L −1 and 20 g d -fructose L −1 as carbon source for maintenance and biomass generation. The (NH 4 ) 2 SO 4 concentration in YSC was reduced from 5 to 1.5 g L −1 for nitrogen-restricted batch cultures. The flask cultures with pgi1 -deficient strains were buffered by CaCO 3 (1%, w/v). For slow glucose release, EnBase B (Biosilta, Oulu, Finland) was prepared according to manufacturer’s instructions, except pH was adjusted to pH 5.6 with HCl and enzyme (5 μL to 20 mL culture broth) added after 24 h incubation, because d -glucose (~ 1 g L −1 ) was present in the prepared medium. Cellulose (α-cellulose, Sigma) was provided as 16.6 g L −1 , with 4.5 mL Cellulast 1.5 L added to a volume of 250 mL in bioreactor culture.
For larger scale cultures, yeast were grown in 250 to 500 mL medium (YSC-ura, or YSC-ura-leu) in Multifors bioreactors (max. working volume 500 mL, Infors HT, Switzerland) at pH 5.5, 30 °C, 1 volume air [volume culture] −1 min −1 (vvm) and 500 rpm agitation with 2 marine impellors, as previously described (Toivari et al. 2010a , b ). The pH was maintained constant by addition of 2 M NaOH or 1 M H 3 PO 4 . Clerol antifoaming agent (Cognis, France, 0.08–0.10 μL −1 ) was added to prevent foam formation. d -Glucose concentration in the culture supernatant was monitored by HPLC and d -glucose, d -fructose and/or ethanol were added as pulses to keep the d -glucose concentration between 0.5 and 5 g L −1 , while providing d -fructose or ethanol as an energy and carbon source for growth.
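The pulse feeding described above was performed manually from offline HPLC readings. The decision rule can be sketched roughly as follows; the thresholds match the stated 0.5–5 g L−1 window, but the pulse size is an illustrative assumption, not a value from the study:

```python
def pulse_feed(glucose_g_per_l, low=0.5, high=5.0, pulse_to=4.0):
    """Decide a d-glucose pulse (g L-1 to add) from an offline HPLC reading.

    Mirrors the manual strategy described in the text: pulse the level
    back up toward the window ceiling when the reading nears the floor.
    The pulse_to target is an illustrative assumption.
    """
    if glucose_g_per_l < low:
        return pulse_to - glucose_g_per_l  # g L-1 of d-glucose to add
    return 0.0  # within or above the window: wait for the next reading

print(pulse_feed(0.3))  # → 3.7
```

In practice the same rule was applied to d -fructose or ethanol pulses, which served as the growth substrate while d -glucose was reserved for the product pathway.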
Biomass was measured as optical density (OD) at 600 nm (OD 600 ) or as dry weight. For dry weight, samples were collected in 2 mL pre-dried, pre-weighed microcentrifuge tubes, washed twice with equal volume distilled water and dried at 105 °C.
The number of metabolically active (vital) cells was determined microscopically by methylene blue (0.25 g L −1 in 0.04 M NaCitrate buffer, pH 8.3) staining. For clarity, the metabolically active cells will be referred to as viable and the inactive as metabolically inactive cells. Both empty and stained cells were counted as metabolically inactive.
Chemical analyses
For determination of intracellular d -glucaric acid concentration, cells were collected from 10 mL culture. Cell pellets were washed with 1.8 mL of 0.9% w/v NaCl solution (9 g L −1 ), and then 1.8 mL deionised water, and frozen at − 20 °C to disrupt membranes. The frozen pellets were freeze-dried using a Christ Alpha 2–4 lyophiliser (Biotech international, Belgium), removing all excess moisture. Intracellular d -glucaric acid was extracted from the lyophilized pellets (6 to 45 mg biomass) in 5 mM H 2 SO 4 (0.5 mL) as described by Nygård et al. ( 2011 ) for extraction of d -xylonate. Cell debris was removed by centrifugation and the supernatant analysed by GC–MS. Intracellular concentrations are given as mg per g dry biomass. For a conservative estimate of intracellular concentration, we assumed that 1 g dry cell weight corresponds to 2 mL cell volume (de Koning and van Dam 1992 ; Gancedo and Serrano 1989 ). This estimate is conservative since it does not take into account the volume of intracellular organelles, variation in cell wall thickness, or the contribution of dead cells to the dry biomass.
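Using the stated assumption of 2 mL cell volume per g dry weight, a pool reported in mg per g dry biomass converts to a molar concentration as sketched below; the 10 mg g−1 input anticipates the intracellular d -glucaric acid pool reported in the Results:

```python
CELL_ML_PER_G_DW = 2.0   # assumed cell volume per g dry weight (see text)
MW_GLUCARIC = 210.14     # g mol-1, d-glucaric acid

def intracellular_mM(mg_per_g_dw, mw=MW_GLUCARIC):
    """Convert an intracellular pool (mg per g dry biomass) to mM."""
    g_per_l = mg_per_g_dw / CELL_ML_PER_G_DW  # mg/mL equals g/L
    return g_per_l / mw * 1000.0

print(f"{intracellular_mM(10):.0f} mM")  # → 24 mM
```

Because the 2 mL g−1 assumption is conservative, the true intracellular concentration is likely somewhat higher than this estimate.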
Concentrations of d -glucose and d -fructose, ethanol, glycerol, and acetate, were analysed by HPLC using a Fast Acid Analysis Column (100 mm × 7.8 mm, BioRad Laboratories, Hercules, CA) linked to an Aminex HPX-87H column (BioRad Labs, USA) with 5 mM H 2 SO 4 as eluent and a flow rate of 0.3 mL min −1 . The column was maintained at 55 °C. Peaks were detected using a Waters 410 differential refractometer and a Waters 2487 dual wavelength UV (210 nm) detector. The ability of d -glucaric acid and its dissociated form d -glucarate to form glucaric acid 1,4-lactone or dilactone needs to be considered in the analytics. d -Glucaric acid, glucaric acid 1,4-lactone, d -glucuronate and myo-inositol were quantified with GC–MS. The d -glucaric acid concentration is presented as the sum of d -glucaric acid and glucaric acid 1,4-lactone. Samples (100 μL), with arabitol as internal standard, were evaporated to dryness and silylated by adding 100 μL pyridine, 100 μL chlorotrimethylsilane and 100 μL N,O-Bis(trimethylsilyl)trifluoroacetamide (BSTFA). Derivatisation was performed at + 70 °C for 60 min. Derivatised samples (1 μL) were subjected to GC–MS analysis (Agilent 6890 Series, USA combined with Agilent, 5973 Network MSD, USA and Combipal injector, Varian Inc., USA). Analytes were injected on split mode (30:1) (200 °C) and separated on a ZB-1HT INFERNO capillary column (30 m × 0.25 mm) with a phase thickness 0.25 μm (Phenomenex, Denmark). Helium (0.9 mL min −1 ) was used as carrier gas in constant flow mode. The temperature program started at 70 °C with 3 min holding time and then increased 10 °C min −1 up to 320 °C. Mass selective detector (MSD) was operated in electron-impact mode at 70 eV, in the full scan m/z 40–550. The ion source temperature was 250 °C and the interface was 280 °C. Compounds were identified according to corresponding standards and with the Palisade Complete 600 K Mass spectral library (Palisade Mass Spectrometry, USA).
d -Glucaric acid concentrations were also measured as the lactone using the hydroxymate method (Lien 1959 ) as described by Toivari et al. ( 2010b ). The lactone assay was used for analysing samples from cultures grown with cellulose and although it correlated well with GC–MS results (Fig. 2 ), the assay would also measure d -gluconic acid, which was probably present in the sample. Thus, lactone measurements should be considered as indicative, but may be over-estimates. | Results
d -Glucose conversion to d -glucaric acid with pgi1- deficient S. cerevisiae
The pgi1 -deficient S. cerevisiae CEN.PK2-1D strains expressed INO1 , Miox and uro1 from a multicopy plasmid and INM1 either as the native genomic copy (H4344), from an additional integrated copy under a constitutive promoter (H4350), or from a multicopy plasmid (H4346) (Table 3 ). Strains H4349 or H4345 without the glucarate pathway were used as controls. Similar strains, with the d -glucaric acid pathway, but with intact PGI1 gene (H4354, H4355, H4356), were also created (Table 3 ).
The strains with intact PGI1 gene were grown in flask cultures with 2% (w/v) d -glucose as a sole carbon source. In 48 h at most 35 mg d -glucaric acid L −1 was produced (data not shown).
Although d -glucose inhibits the growth of pgi1 -deficient strains at concentrations above 2 g L −1 , concentrations up to 4 or 5 g L −1 could be tolerated in cultures with sufficient biomass (OD600 12). With 4 g d -glucose L −1 and 20 g d -fructose L −1 , strain H4344, with the d -glucaric acid pathway with endogenous INM1 , produced 240 ± 11 mg d -glucaric acid L −1 in 98 h, whereas the control strain H4345 did not produce any d -glucaric acid. In addition, 370 ± 11 mg L −1 of myo-inositol was detected from strain H4344 compared to 100 ± 26 mg L −1 from H4345. All d -glucose and d -fructose were consumed, biomass, ethanol, acetate and glycerol were formed and partly consumed.
The other pgi1 -deficient S. cerevisiae strains H4346, with the d -glucaric acid pathway with INM1 on a multicopy plasmid, and the control strain H4349 were cultivated in the presence of 3.4 g L −1 uniformly labelled d -glucose. To decrease biomass formation, lower d -fructose (12 g L −1 ) and ammonium sulphate concentrations were used with initial high biomass (OD600 13). In 73 h, 280 mg L −1 of labelled d -glucaric acid and 330 mg L −1 labelled myo-inositol was detected. Thus, altogether, 610 mg L −1 of 13 C-labelled d -glucose was directed to the d -glucaric acid pathway confirming formation of d -glucaric acid from d -glucose via myo-inositol. The d -glucaric acid pathway intermediate d -glucuronate was not detected. With the control strain H4349 no d -glucaric acid, or d -glucuronate, was detected. With higher d -glucose concentration the pgi1 -deficient strains lost viability (data not shown).
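As a cross-check on the labelling experiment, the two 13C-labelled products can also be expressed as d -glucose equivalents on a molar basis, since one mole of d -glucose gives one mole of either product. Molecular weights are standard values:

```python
MW = {"glucose": 180.16, "inositol": 180.16, "glucarate": 210.14}

def glucose_equivalents(mg_glucarate_per_l, mg_inositol_per_l):
    """mg L-1 of d-glucose accounted for by the two labelled products
    (1 mol of either product derives from 1 mol of d-glucose)."""
    return (mg_glucarate_per_l * MW["glucose"] / MW["glucarate"]
            + mg_inositol_per_l)  # myo-inositol and glucose share one MW

print(f"{glucose_equivalents(280, 330):.0f} mg/L")  # → 570 mg/L
```

The simple sum of product masses (610 mg L−1) is slightly larger than this glucose-equivalent figure because d -glucaric acid is heavier than d -glucose.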
Evaluation of different feeding strategies for production of d -glucaric acid in bioreactor cultures
The pgi1 -deficient strains were further studied in bioreactor cultures with d -glucose pulses in nitrogen-sufficient or -restricted conditions. When the pgi1 -deficient S. cerevisiae strain H4346 expressing the d -glucaric acid pathway was grown in pH-regulated batch cultures at pH 5.5, 1.3 g d -glucaric acid L −1 (yield 0.12 ± 0.01 g d -glucaric acid [g d -glucose consumed] −1 ) was produced (H4346, Figs. 3 , 4 a). d -Glucaric acid was produced at a rate of approximately 7 mg L −1 h −1 . Myo-inositol was detectable in the culture supernatant after 45 h (0.06 g L −1 ) and accumulated in proportion (10 ± 1%) to d -glucaric acid to a final concentration of 0.15 g L −1 . Up to 0.7 g glycerol L −1 and 2.5 g acetate L −1 were also produced, but acetate was consumed when ethanol was not available as a carbon source. Only 4.7 ± 0.1 g biomass L −1 was produced from the 15.9 g d -fructose L −1 , 14.7 g d -glucose L −1 and 6.1 g ethanol L −1 which were provided (yield of biomass on substrate ~ 0.17 g [g substrate consumed] −1 ).
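The reported titer, yield and rate are internally consistent, as a quick back-calculation shows; the 186 h duration below is inferred from the ~7 mg L−1 h−1 rate, not stated directly:

```python
def product_yield(titer_g_l, substrate_consumed_g_l):
    """g product per g substrate consumed."""
    return titer_g_l / substrate_consumed_g_l

def volumetric_rate_mg_l_h(titer_g_l, hours):
    """Average production rate in mg L-1 h-1."""
    return titer_g_l / hours * 1000.0

# Consumed d-glucose back-calculated from the reported titer and yield:
consumed = 1.3 / 0.12                       # ≈ 10.8 g/L of the 14.7 g/L fed
print(f"consumed ≈ {consumed:.1f} g/L")
print(f"yield check: {product_yield(1.3, consumed):.2f} g/g")
# Duration inferred from the ~7 mg/L/h average rate (an assumption):
print(f"rate at 186 h: {volumetric_rate_mg_l_h(1.3, 186):.1f} mg/L/h")
```

The gap between d -glucose fed and the back-calculated consumption is consistent with residual d -glucose remaining in the medium at harvest.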
The intracellular concentration of d -glucaric acid was constant at 10 ± 1 mg [g biomass] −1 . Intracellular myo-inositol concentrations (6 ± 1 mg [g biomass] −1 ) were similar to those of d -glucaric acid, and the ratio of myo-inositol to d -glucaric acid in the cytoplasm was much higher than in the supernatant.
Methylene blue staining showed that 63 ± 2% of the cells were metabolically inactive after 70 h (Fig. 3 ). A high proportion of metabolically inactive cells was also observed for the pgi1 -deficient parent strain without the d -glucaric acid pathway in flask cultures (58 ± 2%, after 43 h on SC medium with 12 g d -fructose L −1 and 0.5 g d -glucose L −1 and 92 ± 1% metabolically inactive cells with 12 g d -fructose L −1 and 3.4 g d -glucose L −1 ), indicating that the high level of metabolically inactive cells was a response to the relatively high concentration of d -glucose in the medium, not to d -glucaric acid production. Some cells adapted to the presence of d -glucose in the medium after ~ 90 h in the bioreactor and the proportion of metabolically inactive cells did not increase much after 70 h.
The pgi1 -deficient d -glucaric acid strain H4346 produced 0.71 g d -glucaric acid L −1 after 160 h, with a yield of 0.23 g d -glucaric acid [g d -glucose consumed] −1 , in nitrogen-restricted medium (Fig. 4 b). In addition, 0.12 g myo-inositol L −1 was produced. No adaptation to the presence of d -glucose occurred, and this strain produced only 3.7 ± 0.1 g biomass L −1 , with 91 ± 0.5% of cells metabolically inactive after 138 h. The concentrations of intracellular d -glucaric acid (8 ± 2 mg [g biomass] −1 ) and myo-inositol (9 ± 4 mg [g biomass] −1 ) were similar to those observed in the nitrogen-sufficient culture. Another pgi1 -deficient d -glucaric acid strain with INM1 integrated (H4350) followed a similar trend in d -glucaric acid production but produced less biomass and myo-inositol (Fig. 4 b, lower panel).
To lower d -glucose toxicity by fed-batch cultivation, the pgi1 -deficient d -glucaric acid strains H4350 and H4346 were grown with d -glucose and d -fructose in the feed. The d -glucose concentration was maintained at less than 2.2 g L −1 during the first 120 h of feeding, with d -fructose concentrations less than 1.5 g L −1 . The strains produced 0.32–0.34 g d -glucaric acid L −1 after 117 h, with a yield of 0.12 g d -glucaric acid [g d -glucose consumed] −1 , but no further production after 117 h. The d -glucose concentration increased after 117 h, reaching concentrations of 3.7–4.5 g L −1 by 160 h (data not shown). Cell viability was measured at 138 h, with 63 ± 4% and 71 ± 4% metabolically inactive cells for strains H4350 and H4346, respectively.
Polymeric substrates for production of d -glucaric acid
To evaluate the use of cheap polymeric substrates for production of d -glucaric acid by the pgi1 -deficient S. cerevisiae strains, a commercial polysaccharide and α-cellulose were used as carbon sources, and hydrolytic enzymes were used for d -glucose release during d -glucaric acid production.
The d -glucose slowly released from the commercial polysaccharide was rapidly consumed and no d -glucose was detected in the culture media in shake flask cultures. In 100 h 12 g L −1 of d -glucose (as approximated from medium without cells) was released. To decrease production of biomass the amount of nitrogen provided was reduced. d -Fructose provided as a carbon source was consumed during the first 20 h. The pgi1 -deficient, d -glucaric acid-pathway-expressing strains H4346 and H4351 produced 0.79 and 0.51 g L −1 d -glucaric acid within 100 h, respectively. Myo-inositol (~ 0.2 g L −1 ) and biomass were also formed. In bioreactors a maximum ~ 0.65 g d -glucaric acid L −1 was produced from the commercial polysaccharide, and 0.32 (pH 5.5) and 0.35 (pH 7) g d -glucaric acid L −1 was produced from α-cellulose within less than 250 h (Fig. 5 a, lactone assay). In the cellulose pH 5.5 culture d -glucose started to accumulate from the beginning of the culture, and in the culture with the commercial polysaccharide after 50 h, whereas with cellulose pH 7 culture d -glucose remained low throughout the culture (Fig. 5 b). | Discussion
In S. cerevisiae the major flux of d -glucose is through glycolysis, and only a small fraction, approximately 1–4%, is channelled to the PPP, depending on strain and culture conditions (Blank et al. 2005 ; Fiaux et al. 2003 ; Gancedo and Lagunas 1973 ; Maaheimo et al. 2001 ; Nidelet et al. 2016 ). In the pgi1 -deficient S. cerevisiae strains, the block to glycolysis leads to accumulation of glucose-6-phosphate (e.g. Ciriacy and Breitenbach 1979 ; Heux et al. 2008 ; Maitra 1971 ). Possibly, this glucose-6-phosphate accumulation could increase the flux of glucose-6-phosphate to myo-inositol and further to d -glucaric acid. Indeed, the yield of d -glucaric acid on d -glucose (0.12 g g −1 ) in the bioreactor with pulsed d -glucose was four-fold higher in our pgi1 -deficient strains compared to the value in a strain with intact Pgi1p reported by Gupta et al. ( 2016 ) and was further increased to 0.23 g g −1 (seven-fold improvement over Gupta et al. ( 2016 )) by restricting the nitrogen supply. Recently, Guo et al. reported a yield of 0.216 g g −1 (Guo et al. 2022 ), close to our best yield. However, considering that our strains accumulated myo-inositol at 12–17% of the d -glucaric acid concentration, it is clear that the yield could be even higher if all myo-inositol were converted to d -glucaric acid. In S. cerevisiae , the PGI1 deletion alone increased the yield, compared to E. coli where the zwf deletion was also needed (Shiue et al. 2015 ). In yeast, deletion or downregulation of both PGI1 and the glucose-6-phosphate dehydrogenase encoding gene ZWF1 has not been reported, but during preparation of this work Zhao et al. ( 2023 ) reported that downregulation of ZWF1 in S. cerevisiae improved the d -glucaric acid titer by 22.4%.
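The fold improvements quoted here can be checked against the implied baseline yield, inferred as 0.12/4 ≈ 0.03 g g−1 from the stated four-fold comparison; this baseline is a back-calculation, not a value taken directly from Gupta et al.:

```python
YIELDS = {"this study, N-sufficient": 0.12,
          "this study, N-restricted": 0.23,
          "Guo et al. 2022": 0.216}

# Baseline inferred from the stated four-fold improvement over Gupta et al.:
baseline = YIELDS["this study, N-sufficient"] / 4   # ≈ 0.03 g/g

for name, y in YIELDS.items():
    print(f"{name}: {y / baseline:.1f}-fold over the inferred baseline")
```

The nitrogen-restricted yield comes out at roughly 7.7-fold over that inferred baseline, matching the "seven-fold" claim in the text.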
In volumetric terms our pgi1-deficient strains produced about half of the d-glucaric acid concentration reported by Gupta et al. (2016) in shake flasks (on average 0.26 compared to 0.54 g L−1), as confirmed by using 13C-labelled d-glucose. The titer was improved to 1.3 g L−1 in the bioreactor by providing the d-glucose in pulses, which was higher than reported by Gupta et al. (2016), but lower than the recently reported volumetric titers of up to 9.5 g L−1 (Table 1). Moreover, the highest yield was observed with restricted nitrogen supply, and in this condition the volumetric concentration was only 0.71 g L−1. Interestingly, our parent strains with intact Pgi1p produced only ~ 30 mg L−1 of d-glucaric acid as measured by GC–MS. This is in line with the results of Liu et al. (2016), who found that P. pastoris produced 108 mg L−1 d-glucaric acid on d-glucose, but much lower compared to other studies (Table 1). In our strains the OPI1 gene was intact. We hypothesized that expression of INO1 under a constitutive promoter is comparable to the effect of OPI1 deletion, because Opi1p is reported to regulate INO1 at the transcriptional level (Henry et al. 2014; Ye et al. 2013). However, the OPI1 deletion could possibly improve d-glucaric acid production also in the pgi1-deficient strains by a still unknown mechanism. In addition, differences in strain background, pathway genes, and culture conditions, e.g. aeration and medium composition, may contribute to the differences in the d-glucaric acid amounts produced.
The S. cerevisiae pgi1-deficient strains do not tolerate d-glucose concentrations above 2 g L−1, which potentially decreases their viability and their suitability for larger-scale processes. Our fed-batch cultures did not drastically improve viability, and more optimised feeding strategies would be needed. The viability was better on the polymeric substrate α-cellulose, where hydrolytic enzymes were used to release d-glucose, but the rate of d-glucose release would require further optimization to increase d-glucaric acid production. Recently, consolidated bioprocesses (CBP) with concomitant sugar release and conversion to products have been developed for d-glucaric acid production (Fang et al. 2022, 2023; Li et al. 2021). This could be an option for the pgi1-deficient strains.
d-Glucaric acid production with S. cerevisiae has developed rapidly during recent years, especially in terms of volumetric titers (Table 1). New myo-inositol oxygenases (Marques et al. 2020) and ways to improve viability (Guo et al. 2022) would be highly interesting to test in the pgi1-deficient S. cerevisiae. Also, uncoupling growth and production, or regulating the Pgi1p amount by synthetic promoters, gene switches, and/or degradation approaches like those implemented in E. coli (Brockman and Prather 2015; Gupta et al. 2017; Hou et al. 2020; Qu et al. 2018), could improve d-glucaric acid production and viability in the pgi1-deficient S. cerevisiae.

Conclusions

d-Glucaric acid is a potential biobased platform chemical. Previously, mainly Escherichia coli, but also the yeasts Saccharomyces cerevisiae and Pichia pastoris, have been engineered for conversion of d-glucose to d-glucaric acid via myo-inositol. One reason for the low yields from the yeast strains is the strong flux towards glycolysis. Thus, to decrease the flux of d-glucose to biomass and to increase the d-glucaric acid yield, the four-step d-glucaric acid pathway was introduced into a phosphoglucose isomerase deficient (Pgi1p-deficient) Saccharomyces cerevisiae strain. High d-glucose concentrations are toxic to the Pgi1p-deficient strains, so various feeding strategies and the use of polymeric substrates were studied. Uniformly labelled 13C-glucose confirmed conversion of d-glucose to d-glucaric acid. In batch bioreactor cultures with pulsed d-fructose and ethanol provision, 1.3 g d-glucaric acid L−1 was produced. The d-glucaric acid titer (0.71 g d-glucaric acid L−1) was lower in nitrogen-limited conditions, but the yield, 0.23 g d-glucaric acid [g d-glucose consumed]−1, was among the highest that has so far been reported from yeast. Accumulation of myo-inositol indicated that myo-inositol oxygenase activity was limiting and that there is potential for an even higher yield.
The Pgi1p-deficiency in S. cerevisiae provides an approach that, in combination with other reported modifications and bioprocess strategies, would promote the development of high-yield d-glucaric acid yeast strains.
Acknowledgements
This study was financially supported by the Academy of Finland through the grants Centre of Excellence in White Biotechnology—Green Chemistry (grant 118573), and SA SBio (grant 310191), and by the European Commission through the Seventh Framework Programme (FP7/2007-2013) under grant agreement N° FP7-241566 BIOCORE which are gratefully acknowledged. We thank Toni Paasikallio for assistance with fermentations, and Tuulikki Seppänen-Laakso, Kaarina Viljanen and Matti Hölttä for assistance with GC-MS measurements.
Author contributions
All authors contributed to the study conception and design. Strain engineering, shake flask cultures, and HPLC analyses were performed by MT and M-LV. Bioreactor cultures, HPLC analyses, viability, and lactone assays were conducted and supervised by MGW. MT and MGW drafted the manuscript. All authors participated in result analysis, commented on previous manuscript versions, and approved the final manuscript.
Funding
Open Access funding provided by Technical Research Centre of Finland. This study was financially supported by the Academy of Finland through the grants Centre of Excellence in White Biotechnology—Green Chemistry (grant 118573), and SA SBio (grant 310191), and by the European Commission through the Seventh Framework Programme (FP7/2007-2013) under grant agreement N° FP7-241566 BIOCORE.
Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Ethical approval
This article does not contain any studies with human or animal subjects. | CC BY | no | 2024-01-15 23:41:53 | Biotechnol Lett. 2024 Dec 8; 46(1):69-83 | oa_package/0a/9a/PMC10787697.tar.gz |
Introduction
Imine-reducing enzymes produce chiral amines, often with excellent stereoselectivity. Therefore, there is constant interest in them in the pharmaceutical industry. This interest is driven by the high number of active pharmaceutical ingredients (APIs) bearing a chiral amine moiety, as illustrated among the FDA-approved small-molecule drugs between 2015 and 2020 (Bhutani et al. 2021). Key chiral amine intermediates of these APIs are often produced enzymatically using transaminases (TAs) (Gomm and O’Reilly 2018), amine dehydrogenases (AmDHs), imine reductases (IREDs), or monoamine oxidases (MAOs) (Sharma et al. 2017; Cosgrove et al. 2018; Patil et al. 2018). These enzymes produce chiral primary amines (TAs, AmDHs), perform asymmetric reduction of preformed imines (IREDs), or stereoselectively oxidize cyclic amines to imines (MAOs). Recently, a subclass of IREDs was found to catalyze reductive amination of carbonyls with small primary amines in close to stoichiometric ratio (Aleku et al. 2017); thus, they were termed reductive aminases (RedAms). The applications of RedAms have been expanding ever since (Schober et al. 2019), but their substrate scope is still limited to aliphatic or smaller aromatic amines, while reactions with more functionalized polar amines are lagging behind.
Opine dehydrogenases (ODHs) represent a unique and yet largely underexplored class in this rapidly expanding landscape of biocatalysts for reductive amination (Telek et al. 2023 ). They catalyze the coupling of α-ketoacids with α-amino acids yielding a variety of opine products. These molecules have diverse structures and physiological roles, and different organisms evolved specialized ODHs for their synthesis (Kato and Asano 2002 ; Harcet et al. 2013 ; McFarlane and Lamb 2020 ; Matveeva and Otten 2021 ). Despite the large evolutionary distance and low apparent sequence similarity among these ODHs, they share general structural and catalytic features (Sharma et al. 2017 ), while their substrate and cofactor preference can vary significantly. The fact that they are able to perform reductive amination with close to stoichiometric ketoacid/amino acid ratio makes them promising prospects to be utilized as industrial biocatalysts. Considering that their natural substrates are chemically very different from the typical imine reductase or reductive aminase substrates, they can fill in a valuable complementary role next to those classes. Highly functionalized polar secondary amines (such as opine derivatives) are potential building blocks of bioactive compounds such as peptidomimetics (Li Petri et al. 2022 ). Their stereoselective synthesis by ODHs can be easily envisaged. However, this potential of ODHs is largely unexploited probably due to the significant difference between their native substrates and the potential target substrates (Ducrot et al. 2021 ) as well as their zwitterionic and highly polar nature. The only example of their utilization as biocatalysts is the ODH from Arthrobacter sp. 1C ( Ar ODH), which was discovered by Asano et al. ( 1989 ) and used for the synthesis of secondary amine dicarboxylic acids with apolar sidechains (Kato et al. 1996 ). 
A decade later, Codexis engineered this enzyme to accept substrates resembling the classic imine reductase substrates like cyclohexanone or n-butylamine. This work, which was disclosed only in a patent (Chen et al. 2013), also shows that ODHs have to undergo significant engineering to accept substrates deviating largely from their natural ones. However, protein engineering towards structurally similar analogues with orthogonally protected or masked functional groups might be less demanding. Isolation of such derivatives could also be more straightforward, enabling more versatility for further functionalization. There are a few other well-characterized enzymes among ODHs, such as the octopine dehydrogenase from Pecten maximus (PmOcDH) (Smits et al. 2008) or the metallophore biosynthetic enzymes from Staphylococcus aureus (SaODH), Pseudomonas aeruginosa (PaODH), and Yersinia pestis (YpODH) (McFarlane et al. 2018). These all have different substrate preferences; PmOcDH is specific for pyruvate and l-arginine, while the enzymes from pathogens use exotic nicotianamine-like amino acid derivatives and pyruvate or α-ketoglutarate. We may add that this chemical diversity has not yet been fully exploited for biocatalysis. In addition, there are many sequences in public databases annotated as opine dehydrogenases, but information on their activity and substrate preferences is lacking; therefore, the above-mentioned diversity is probably even greater. Metagenome search is a method capable of sampling this diversity while simultaneously selecting for advantageous properties (Robinson et al. 2021). It involves the interrogation of the whole genetic information extracted from a certain environment. Today, we have the technology to find genes and enzymes in such metagenomic samples without identifying the (possibly unknown) host organisms (Kerepesi et al. 2014, 2015; Kerepesi and Grolmusz 2016a, b, 2017).
This allows access to potential biocatalysts from previously uncharacterized species that could exhibit altered substrate specificity, increased thermotolerance or catalytic activity. This method was already applied to find novel enzymes for chiral amine synthesis such as transaminases (Baud et al. 2017 ; Leipold et al. 2019 ) or amine dehydrogenases (Caparco et al. 2020 ). For opine dehydrogenases, a recent paper describes one novel metagenome-derived alanopine dehydrogenase (Kaličanin et al. 2023 ), but no further enzymes are reported to date.
In our study, we set out to expand the class of ODHs with new enzymes with useful properties for industrial biocatalysis. We applied metagenome mining to identify novel ODHs from extreme environments. The catalytic activity and substrate preference of the newly discovered enzymes were investigated under conditions relevant for biocatalytic applications. Their unique structural and sequence features were thoroughly analyzed to aid future engineering of these enzymes towards industrial applications with specific substrates.

Materials and methods
All reagents and solvents, amino acids, and ketoacids were commercially available and used without further purification.
HPLC methods
Analytical HPLC measurements were performed on Agilent Technologies 1260 LC system equipped with a DAD detector using Gemini® 3 μm NX-C18, 50 mm × 3.00 mm i.d. 110 Å column, and 5 mM aqueous NH 4 HCO 3 solution and MeCN as eluents in gradient mode. Analytical LC–MS was performed on an Agilent Technologies 1200 LC system equipped with Agilent 6140 quadrupole MS, operating in positive or negative ion electrospray ionization mode (molecular weight scan range was 100 to 1350 m/z) with parallel UV detection using Gemini® 3 μm NX-C18, 50 mm × 3.00 mm i.d. 110 Å column, and 5 mM aqueous NH 4 HCO 3 solution and MeCN as eluents in gradient mode. Purifications were carried out with Teledyne Isco preparative HPLC using Gemini® 5 μm NX-C18, 250 mm × 50 mm i.d. 110 Å column, and 5 mM aqueous NH 4 HCO 3 solution and MeCN as eluents in gradient mode.
NMR measurements
1 H NMR and 13 C NMR spectra were recorded on a Bruker Avance Ultrashield 400 (100 MHz 13 C) instrument with Bruker Prodigy Cryo Probe and were internally referenced to residual protium solvent signals (note: D 2 O referenced at 4.70 ppm). Samples (5–10 mg) were dissolved in 0.5 mL D 2 O. In the case of Fmoc- 2a products, 4 μL trifluoroacetic acid (TFA) was added for complete dissolution. Data for 1 H NMR are reported as follows: chemical shift ( δ ppm), multiplicity ( s = singlet, d = doublet, dd = doublet of a doublet, t = triplet, dt = doublet of a triplet, m = multiplet), coupling constant (Hz), and integration. Fmoc-derivatized products form amide rotamers appearing as doubled signals which is indicated by the two δ (ppm) values separated by a slash, e.g., 8.38/8.31 (d/d, J = 1.3/1.4 Hz, 1 H). Assignments of protons are listed on the spectra (Figure S7-S14 ). Data for 13 C NMR are reported in terms of chemical shift, and no special nomenclature is used for equivalent carbons.
High resolution mass spectrometry
HRMS measurements were carried out on an Agilent 6545 Q-TOF mass spectrometer system: mass resolution, 45.000 FWHM @ m/z 2.722 Da; ion source, AJS-ESI; sheath gas temperature, 300 °C; drying gas temperature, 300 °C; ionizing voltage, 2500 V; and nozzle voltage, 1000 V.
Metagenome search
The method is based on an artificial intelligence tool, the hidden Markov models (HMMs) (Yoon 2009 ), described in detail by Szalkai and Grolmusz ( 2019 ), and is demonstrated on our webserver for smaller metagenomes (< 1 GB) at the address https://metahmm.pitgroup.org .
First, we trained our metagenome search tool on ten different, already described opine dehydrogenase sequences (listed in Table S1). From these sequences, using multiple alignment with the Clustal Omega software as an intermediate step (Sievers et al. 2011), a hidden Markov model was built with the hmmbuild tool (Eddy 2009). Applying this Markov model with the hmmsearch tool (Eddy 2009), we selected tens of thousands of short reads from the target metagenomes. The target metagenomes were downloaded from the NCBI Short Read Archive, with accession numbers SRR2915707 (deep-sea sediment metagenome), SRR16646004 (tropical soil metagenome), SRR16606022 (hot spring metagenome, Sativali), and SRR16588181 (hot spring metagenome, Tuwa). The hits from the metagenomes were assembled into the longest possible sequences by the MegaHIT assembler (Li et al. 2015), and from these sequences we selected potential complete genes with start and end codons (Table S2). Nucleotide sequence data reported are available in the Third Party Annotation (TPA) Section of the DDBJ/ENA/GenBank databases under the accession numbers TPA: BK063522-BK063532.
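The last filtering step, keeping only assembled sequences that contain a complete open reading frame, can be sketched as follows. This is an illustrative simplification (forward strand only, fixed ATG start codon); the function name and parameters are ours, not part of the published pipeline:

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def complete_orfs(contig, min_len=300):
    """Forward-strand ORFs that begin with ATG and end at an in-frame
    stop codon -- a simplified sketch of the 'complete genes with start
    and end codons' filter.  A real pipeline would also scan the
    reverse complement, i.e. all six reading frames."""
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(contig) - 2, 3):
            codon = contig[i:i + 3]
            if start is None and codon == "ATG":
                start = i                      # open a candidate ORF
            elif start is not None and codon in STOP_CODONS:
                if i + 3 - start >= min_len:   # keep only full-length genes
                    orfs.append(contig[start:i + 3])
                start = None                   # look for the next start
    return orfs
```

For example, `complete_orfs("ATGAAAAAAAAATAA", min_len=6)` returns the single 15-nt ORF, while a contig lacking a start or stop codon yields nothing.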
Sequence analysis and structural modelling of mODHs
Sequence similarity network (SSN) of the new mODHs and the template ODHs was created using the Enzyme Similarity Tool (EFI-EST, https://efi.igb.illinois.edu/efi-est/ ) (Zallot et al. 2019 ). Multiple sequence alignments and phylogenetic trees were generated using Clustal Omega ( https://www.ebi.ac.uk/Tools/msa/clustalo/ ). Structural modelling of mODHs was carried out by open source AlphaFold prediction algorithm accessed through ColabFold (Mirdita et al. 2022 ). Structures were visualized and analyzed in PyMOL. Electrostatic surface potential calculations were performed using the APBS (Adaptive Poisson-Boltzmann Solver, https://www.poissonboltzmann.org/ ) plugin in PyMOL ( https://www.pymol.org/pymol ). Analysis of binding pockets was aided by Caver Web v1.2 ( https://loschmidt.chemi.muni.cz/caverweb/ ) (Stourac et al. 2019 ).
Enzyme production and purification
Nine selected mODH genes as well as the coding sequence of Ar ODH were codon optimized for E. coli , synthesized by Genescript and cloned into pET14b vector using Nde I and Bam HI cloning sites resulting in N-terminal His-tag proteins (for protein and nucleotide sequences see Table S2 ). Plasmids were transformed into E.coli BL21(DE3) chemically competent cells, and 5 mL overnight cultures were grown in Luria–Bertani (LB) medium supplemented with carbenicillin. Then, 4 mL was added to 500 mL fresh LB medium. The culture was grown at 37 °C to OD = 0.5 and induced by addition of 0.5 mM IPTG. The flasks were then incubated in a shaker at 18 °C overnight. The cells were harvested by centrifugation at 4000 rpm, 4 °C for 30 min. The pellets were resuspended in lysis buffer (50 mM Tris pH = 8.5, 300 mM NaCl, 1 mM benzamidine, 2 mM phenylmethylsulfonyl fluoride (PMSF), 1 mM Tris(2-carboxyethyl)phosphine (TCEP), DNase) and lysed by ultrasonication. The suspension was centrifuged at 11,000 rpm, 4 °C for 30 min. The supernatant was loaded onto equilibrated Ni–NTA column and washed with 15 mM imidazole in lysis buffer. The bound protein was eluted with 250 mM imidazole in lysis buffer and then dialyzed in 50 mM Tris pH = 8.5. The resulting solution was aliquoted after addition of 10% glycerol and then stored at − 20 °C until further use. Total protein concentration was determined based on absorption at 280 nm measured on a NanoDrop spectrophotometer. With this method, six mODHs and Ar ODH could be purified (see Table S2 ). Three mODHs were not expressed in soluble form in this experimental setting.
Site-directed mutagenesis
Mutagenesis experiments were conducted according to the Q5® Site-Directed Mutagenesis Kit Quick Protocol provided by the manufacturer (Table S6 ). KLD reactions were transformed into XL1Blue chemically competent cells and were plated on LB-TC/CAR agarose plates. Single colonies were picked and used to inoculate 5 mL overnight cultures. From these, plasmids were purified according to the NucleoSpin Plasmid Prep protocol. The purified plasmids were sent to sequencing to verify mutations and then used for protein expression.
Differential scanning fluorimetry (Thermofluor) measurements
Enzyme solutions (mODHs, ArODH; 1 mg/mL) were mixed with 1000× SYPRO Orange dye. Twenty-five-microliter samples were loaded onto 96-well plates in triplicate. The fluorescent signal versus temperature was recorded in a CFX96 real-time PCR detection system (Bio-Rad) with a gradient from 25 to 95 °C (0.5 °C steps).
Kinetic measurements
Enzyme activities were measured spectrophotometrically at 30 °C in 96-well half-area plates in a Thermo ScientificTM Multiskan SkyHigh Microplate Spectrophotometer (Thermo Fisher Scientific Inc., Waltham, MA, USA). Standard assays were carried out by using 15 mM amino acid and 0.2 mM NAD(P)H in 100 mM sodium phosphate buffer (pH = 8.0). The reaction was started by the addition of 10 mM pyruvate. The decrease in absorbance was monitored at 340 nm. Initial velocities ( v 0 ) were calculated from the linear section of the plots using an NAD(P)H calibration. For the determination of kinetic constants, amino acid concentrations were varied between 1 and 30 mM, while cofactor concentrations were between 0.04 and 0.5 mM. Depending on the specific activities of each enzyme, enzyme concentrations were fixed between 0.015 and 1.5 μM (Tables S9-S12 ). Kinetic constants were determined from the Michaelis–Menten plots using GraphPad Prism software (Tables S9-S12 , Figures S15-S18 ). Heat stability experiments were carried out by incubating the enzymes at 40, 50, or 60 °C for 1 h before measuring their specific activities.
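Fitting the Michaelis–Menten equation, done here with GraphPad Prism, can be approximated without any dependencies by linear regression on the Lineweaver–Burk transform 1/v = (KM/Vmax)(1/[S]) + 1/Vmax. A sketch (exact on noise-free data; for real, noisy data direct nonlinear fitting is preferable because the reciprocal transform distorts the error structure):

```python
def fit_michaelis_menten(substrate_conc, rates):
    """Estimate (Vmax, KM) from initial rates via least-squares on the
    linearised Lineweaver-Burk form: y = slope * x + intercept, with
    x = 1/[S], y = 1/v, slope = KM/Vmax, intercept = 1/Vmax."""
    x = [1.0 / s for s in substrate_conc]
    y = [1.0 / v for v in rates]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    vmax = 1.0 / intercept
    km = slope * vmax
    return vmax, km
```

On synthetic rates generated with Vmax = 2 and KM = 5, the fit recovers both constants exactly.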
Small scale biocatalytic reactions, parameter optimizations
For preliminary activity measurements and reaction optimization, the reaction mixtures (500 μL) contained 62.5 mM d -glucose, 6 U/mL GDH (Codexis CDX-901, 50 U/mg), 0.4 mM NAD + /NADP + , 25 mM amino acid substrate, and 5 equivalents of sodium pyruvate substrate in Na-phosphate buffer adjusted to pH 7.5 in a 1.5-mL Eppendorf tube. To the blank mix was added 30 μg purified enzyme (mODHs, Ar ODH). The reaction mixtures were incubated at 30 °C with shaking at 300 rpm for 24 h. Parameters were optimized on the following ranges: pH 6.0–8.0, temperature 30–50 °C, pyruvate equivalence 1.5–5.0, and enzyme loading 7.5–60 μg/mL. To follow the reactions, 100-μL samples were taken and derivatized based on the protocol from Tassano et al. ( 2022 ). Samples were diluted 1:1 with Na-borate buffer (300 mM, pH 9.2), then 200 μL of a solution of Fmoc-Cl (15 mM in ACN) was added, and the samples were shaken for 10 min, 30 °C, and at 1000 rpm. Next, 200 μL solution of amantadine hydrochloride (300 mM in 1:1 H 2 O/ACN) was added. The formed white precipitate was centrifuged 5 min/1500 rpm, and the supernatant was analyzed. Analysis was done on HPLC–MS. The conversion was calculated by measuring the depletion of the amino acid as follows: HPLC area of the Fmoc-amino acid peak in the enzymatic reaction was compared to that of blank reactions carried out under identical conditions without ODH present.
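The conversion calculation described above is simply a ratio of Fmoc-amino-acid peak areas between the enzymatic reaction and the blank. A one-line sketch (the function name is ours):

```python
def conversion_from_depletion(area_blank, area_reaction):
    """Apparent conversion (%) from depletion of the derivatized amino
    acid: the Fmoc-amino-acid peak area in the enzymatic reaction is
    compared with a blank run without ODH under identical conditions."""
    return 100.0 * (1.0 - area_reaction / area_blank)
```

For example, a substrate peak shrinking from 1000 to 250 area units corresponds to 75% conversion.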
Substrate screening
On a 1-mL 96-deep-well plate, 500 μL reactions were carried out. The reaction mixtures contained 62.5 mM d -glucose, 6 U/mL GDH (Codexis CDX-901, 50 U/mg), 0.4 mM NADP + (or NAD + for Ar ODH), 25 mM amino acid substrate, and 75 mM of ketoacid substrate in Na-phosphate buffer adjusted to pH = 8.0. To the blank mix was added 15 μg purified enzyme (six mODHs, Ar ODH). The reaction mixtures were incubated at 40 °C with shaking at 300 rpm for 24 h. Reactions were performed in triplicates. Sample preparation and calculation of conversion were done as described in the previous section, and analysis was carried out on HPLC.
Preparative scale reactions and purifications
30 mL reaction mixture contained 62.5 mM d -glucose, 6 U/mL GDH (Codexis CDX-901, 50 U/mg), 0.4 mM NADP + , and 25 mM amino acid substrate with 3 equivalents of sodium pyruvate in Na-phosphate buffer adjusted to pH 8.0. To the blank mix was added 900 μg purified enzyme ( Ar ODH or mODH-582). The reaction mixture was stirred at 40 °C for 24 h. After 24 h, the reaction was boiled for 15 min, and then, the precipitate was filtered out. To the filtrate was added 8 equivalents Fmoc-Cl in 60 mL ACN, and the mixture was stirred at 60 °C for 16 h. Afterwards, the pH was adjusted to 7.5, and the mixture was extracted twice with EtOAc. The aqueous phase was concentrated and purified by preparative HPLC.
(2S)-2-[[(1R)-1-carboxyethyl]-(9H-fluoren-9-ylmethoxycarbonyl)amino]-3-(1H-imidazol-4-yl)propanoic acid (Fmoc-(1R,2S)-2a)
The above protocol performed with mODH-582 afforded Fmoc-(1 R ,2 S )- 2a as a white powder (yield = 20%, 51 mg).
1 H-NMR (400 MHz, D 2 O + TFA): δ ppm 8.38/8.31 (d/d, J = 1.3/1.4 Hz, 1 H), 7.74–7.26 ( m , 8 H), 6.81/6.14 (s/s, 1 H), 5.02–4.56 ( m , 2 H), 4.25/3.81 (dd/dd, J = 8.2/8.9, 6.0/6.2 Hz, 1 H), 4.20/4.13 (bs/bs, 1 H), 4.13/3.70 (q/q, J = 14.2/14.3, 7.1/7.2 Hz, 1 H), 3.17–1.96 ( m , 2 H), 0.98/0.47 (d/d, J = 7.2/7.1 Hz, 3 H).
13 C-NMR (100 MHz, D 2 O + TFA): δ ppm 174.9, 174.6, 173.2, 172.3, 163.3, 162.9, 162.6, 162.2, 156.3, 155.9, 143.7, 143.6, 143.5, 143.4, 141.4, 141.3, 141.1, 140.9, 132.7, 129.6, 129.0, 127.9, 127.8, 127.7, 127.6, 127.4, 127.3, 127.1, 124.4, 124.3, 124.2, 124.1, 120.5, 120.1, 120.0, 117.6, 116.8, 116.7, 114.7, 111.8, 67.0, 66.7, 59.2, 57.9, 55.9, 55.8, 46.8, 46.6, 24.4, 14.1, 13.7.
HRMS m/z ([M + H] + ) calcd. for C 24 H 24 N 3 O 6 450.1660 found 450.1661 ( δ = 0.22 ppm).
The above protocol performed with Ar ODH afforded Fmoc-(1 R ,2 S )- 2a as a white powder (yield = 40%, 91 mg).
The analytical data are identical to those obtained for Fmoc-(1 R ,2 S )- 2a using mODH-582.
(2S)-2-[[(1R)-1-carboxyethyl]-(9H-fluoren-9-ylmethoxycarbonyl)amino]butanedioic acid (Fmoc-(1R,2S)-3a)
The above protocol performed with mODH-582 afforded Fmoc-(1 R ,2 S )- 3a as a white powder (yield = 24%, 77 mg).
1 H-NMR (400 MHz, D 2 O): δ ppm 7.77–7.26 ( m , 8 H), 4.76–4.48 ( m , 2 H), 4.21–4.16 ( m , 1 H), 4.10/4.00 (dd/dd, J = 9.7/8.0, 4.7/6.8 Hz, 1 H), 3.90/3.74 (q/q, J = 14.0/14.1, 7.0/7.0 Hz, 1 H), 2.80–2.00 ( m , 2 H), 1.19/0.73 (d/d, J = 7.0/7.0 Hz, 3 H).
13 C-NMR (100 MHz, D 2 O): δ ppm 178.2, 177.6, 177.4, 177.3, 176.8, 175.6, 156.1, 156.1, 143.8, 143.8, 143.5, 143.4, 141.2, 141.1, 141.0, 141.0, 127.9, 127.8, 127.8, 127.8, 127.4, 127.4, 127.3, 127.2, 124.7, 124.5, 124.4, 120.2, 120.1, 120.1, 120.0, 67.0, 66.9, 64.4, 61.6, 61.4, 46.8, 46.8, 37.1, 15.1, 14.9.
HRMS m/z ([M + H] + ) calcd. for C 22 H 22 NO 8 428.1340 found 428.1342 ( δ = 0.47 ppm).
(2R)-2-[[(1R)-1-carboxyethyl]-(9H-fluoren-9-ylmethoxycarbonyl)amino]-3-sulfo-propanoic acid (Fmoc-(1R,2R)-7a)
The above protocol performed with mODH-582 afforded Fmoc-(1 R ,2 R )- 7a as a white powder (yield = 23%, 80 mg). Note that there is no switch in stereoselectivity but the substituent priority according to the CIP convention changed.
1 H-NMR (400 MHz, D 2 O): δ ppm 7.78–7.26 ( m , 8 H), 4.83–4.54 ( m , 2 H), 4.17–4.14 ( m , 1 H), 4.04/3.88 (dd/dd, J = 9.3/9.2, 3.3/3.2 Hz, 1 H), 3.81–3.74 ( m , 1 H), 3.47–2.13 ( m , 2 H), 1.28/0.83 (d/d, J = 7.0/7.0 Hz, 3 H).
13 C-NMR (100 MHz, D 2 O): δ ppm 177.9, 176.7, 175.6, 174.1, 155.9, 155.6, 143.7, 143.7, 143.5, 143.4, 141.3, 141.3, 141.1, 128.0, 127.8, 127.8, 127.4, 127.3, 127.2, 124.6, 124.4, 124.3, 124.2, 120.2, 120.2, 120.1, 120.1, 67.0, 66.9, 63.9, 63.1, 62.8, 62.1, 51.0, 50.8, 46.7, 46.6, 14.6, 14.6.
HRMS m/z ([M + NH 4 ] + ) calcd. for C 21 H 25 N 2 O 9 S 481.1275 found 481.1276 ( δ = 0.21 ppm).
Synthesis of reference materials Fmoc-(1R,2S)-2a* and Fmoc-(1S,2S)-2a
In a 100-mL round-bottom flask, methanol (40 mL) was added to l -histidine (2 mmol, 310.3 mg) and Na-acetate (4 mmol, 328.1 mg) followed by pyruvic acid (4 mmol, 278.0 μL). Then, NaCNBH 3 (4 mmol, 4 mL of 1 M solution in THF) was added dropwise at RT. The solution was stirred at RT for 20 h. After histidine appeared absent on LC–MS, 9-fluorenylmethyl chloroformate (Fmoc-Cl) (4 mmol, 1.03 g) was added, and the mixture was stirred at 60 °C. After 3 h, another batch of Fmoc-Cl (2 mmol, 515 mg) was added, and stirring was continued for 4 h at 60 °C and then for 4 days at RT. MeOH was removed under reduced pressure. The residue was partially dissolved in 1 M HCl (8 mL) and extracted with 15 mL EtOAc. Both phases contained the product diastereomers. The aqueous phase was purified on preparative HPLC. The organic phase was evaporated to dryness. The residue was dissolved in acetonitrile (8 mL) and purified on preparative HPLC. Fractions containing the corresponding diastereomers were combined and freeze dried overnight resulting in Fmoc-(1 R ,2 S )- 2a * (6%, 53 mg; * labels its chemical origin) and Fmoc-(1 S ,2 S )- 2a (3%, 30 mg).
(2S)-2-[[(1R)-1-carboxyethyl]-(9H-fluoren-9-ylmethoxycarbonyl)amino]-3-(1H-imidazol-4-yl)propanoic acid (Fmoc-(1R,2S)-2a*)
The obtained analytical data are identical to those of Fmoc-(1 R ,2 S )- 2a .
(2S)-2-[[(1R)-1-carboxyethyl]-(9H-fluoren-9-ylmethoxycarbonyl)amino]-3-(1H-imidazol-4-yl)propanoic acid (Fmoc-(1S,2S)-2a)
1 H-NMR (400 MHz, D 2 O + TFA): δ ppm 8.37/8.27 (bs/d, J = 1.2 Hz, 1 H), 7.69–7.21 ( m , 8 H), 6.88/5.84 (s/s, 1 H), 4.98–4.53 ( m , 2 H), 4.23/3.81 (dd/dd, J = 8.1/10.5, 6.1/5.8 Hz, 1 H), 4.16/4.06 (bs/bs, 1 H), 3.67–3.58 ( m , 1 H), 3.18–1.81 ( m , 2 H), 0.94/0.55 (d/d, J = 7.0/7.1 Hz, 3 H).
13 C-NMR (100 MHz, D 2 O + TFA): δ ppm 174.4, 171.7, 163.0, 162.6, 156.0, 155.9, 143.8, 143.7, 143.4, 141.5, 141.2, 140.8, 132.9, 132.8, 129.1, 128.5, 128.0, 127.8, 127.7, 127.7, 127.4, 127.3, 127.2, 124.3, 124.2, 120.5, 120.2, 120.0, 117.6, 116.9, 114.7, 111.8, 66.7, 66.6, 60.1, 59.9, 58.6, 56.5, 46.8, 46.6, 24.5, 24.3, 14.1, 13.2.
HRMS m/z ([M + H] + ) calcd. for C 24 H 24 N 3 O 6 450.1660 found 450.1660 ( δ = 0.00 ppm).

Results
Metagenome mining of ODHs
We started by building a database containing ODH sequences that could be used as templates for the metagenome search. We mined the literature for enzymes with known primary structure and confirmed activity to avoid falsely annotated sequences. In the end, we collected 11 sequences from a diverse set of organisms and with wide substrate and cofactor preferences (see Fig. 1 and Table S1). These were used to construct a multiple sequence alignment to train a hidden Markov model (HMM), as described in a recent publication (Szalkai and Grolmusz 2019). In order to find enzymes with preferable properties for industrial biocatalysis, we selected datasets from the NCBI Short Read Archive collected from extreme environments (see “Materials and Methods”). The trained HMM was used to extract full-length sequences resembling the general features of ODHs. Of the four different datasets (see “Materials and Methods”), only one, the Sativali hot spring (67 °C) metagenome (Narsing Rao et al. 2022), provided new putative ODH genes. From that dataset, 11 sequences were obtained that showed remarkably low sequence identity (< 40%) with all the template sequences used while displaying a higher degree of similarity among each other (Fig. 1A and B). Interestingly, although the HMM included characteristics from all 11 diverse ODHs, for all of the finally identified mODHs the closest homologue in the template set is the ODH from Arthrobacter sp. 1C (ArODH). Since ArODH is the only known example of an ODH used in biocatalysis, we decided to use it as a reference enzyme for the characterization of the metagenomic ODHs (mODHs) as potential new biocatalysts. From the 11 sequences, we discarded two due to the lack of a consensus sequence required for cofactor binding at the beginning of these genes (see Table S2). Thus, we ordered ten synthetic genes (nine mODHs and ArODH), cloned them into the pET14b vector, expressed them in E. coli, and purified the proteins.
Three metagenomic enzymes did not exhibit soluble expression in our system, so in the end, six mODHs and Ar ODH were obtained with yields between 10 and 70 mg/L culture. Those metagenomic enzymes have close to 40% sequence identity to Ar ODH but have varying degree of similarity (~ 45–80%) among each other (Fig. 1 C).
Substrate screen for mODHs
As the main aim of our study was to explore and exploit the synthetic utility of the newly discovered enzymes, we tested their activity in small-scale biocatalytic reactions (Scheme 1), where activity was measured as the apparent conversion to the product. The reactions were followed by HPLC with UV and MS detection. Since most of the substrates used had poor retention on normal phase and a weak UV signal, we applied a derivatization method (Tassano et al. 2022) to follow the consumption of the amino acid substrate (see “Materials and Methods”). The presence of the product could be confirmed based on its MS signal. Preliminary experiments showed that the mODHs have negligible activity with the best amino acid substrates of ArODH, like l-norvaline or l-phenylalanine, and only a few percent conversion with l-alanine when using NADH, the usual cofactor of ArODH. In the latter case, the observable product formation prompted us to test NADPH instead of NADH as cofactor, which yielded enhanced conversions (data not shown). Therefore, NADPH was used for the optimization of reaction conditions and substrate screening. Next, after screening a restricted set of natural amino acids including apolar (alanine, leucine, valine, methionine), polar (serine, threonine, asparagine, glutamine), and charged ones (aspartate, glutamate), we identified l-aspartate (3) as a preferred substrate for the mODHs. Pyruvate (a) is often the only ketoacid substrate of ODHs; therefore, we did not consider changing that substrate at this stage. We used the l-aspartate (3) and pyruvate (a) substrate combination to optimize the reaction conditions for extensive substrate screening using the two best performing enzymes (mODH-45 and mODH-49). The optimum conditions differ from the ones normally used for ArODH, with a higher temperature (40 °C vs. 30 °C) and a slightly higher pH (8.0 vs. 7.5) (Kato et al. 1996) (Scheme 1).
Using these conditions, we set out to investigate the activity of mODHs on a diverse set of amino acid and ketoacid substrates. Amino acids included 13 canonical ( 1–4 , 13–21 ) and 13 non-canonical amino acids ( 5–12 , 22–26 ), while on the ketoacid side, only natural substrates were tested (see Tables S3 and S4 ). The screening results show that in contrast to Ar ODH’s preference for apolar amino acid substrates, mODHs exhibit the highest conversions with polar amino acids, mostly with negatively charged sidechains (Fig. 2 ). l -Aspartate ( 3 ) and l -glutamate ( 4 ) clearly stand out among the natural amino acids with four of six mODHs exhibiting 100% conversion with the former and above 50% with the latter. Exchanging the carboxylate with a sulfonate group ( l -cysteic acid ( 7 ) or homocysteic acid ( 9 )) results in similar or even better substrates for mODHs. Notably, phosphorylated amino acids ( 5 , 6 ) are converted significantly less efficiently, even though mODHs are still superior to Ar ODH in these reactions. Aromatic side chains are generally not well tolerated by mODHs; however, l -histidine ( 2 ) is transformed with conversions comparable to Ar ODH, while mODH-48 is also capable of transforming 4-carboxy- l -phenylalanine ( 8 ) with moderate conversion. Opine dehydrogenases have strong enantiomer specificity towards l -amino acids (Telek et al. 2023 ). mODHs share this feature as we could demonstrate by comparing conversions with d - and l -aspartate (data not shown). This strong specificity can explain the moderate conversions with racemic amino acids (homocysteic acid ( 9 ), 2-aminopimelic acid ( 10 ), homoserine ( 11 )) that otherwise proved to be good substrates of mODHs. Among ketoacids, pyruvate ( a ) is clearly preferred above all others. α-Ketobutyrate ( c ) is accepted to some extent, while conversions with glyoxylate ( b ) and α-ketoglutarate ( d ) are negligible (except for mODH-47).
Kinetic characterization of mODHs
After testing the mODHs in small-scale biocatalytic reactions, we wanted to gain insight into the kinetic properties that govern the above-described apparent catalytic activities. We performed Michaelis–Menten kinetic studies to assess the amino acid substrate kinetics and determined kinetic constants for NADH/NADPH (Table 1 ). The mODHs show similar characteristics in their turnover numbers ( k cat ); the differences between their catalytic efficiencies can be attributed to their K M values, which can deviate by more than an order of magnitude from each other. The cofactor kinetics corroborate our assumption of NADPH preference for most of the mODHs, with the exception of mODH-48, which shows similar catalytic efficiency with NADH, and mODH-582, which appears to prefer NADH. In the latter case, the catalytic efficiency with NADPH is still comparable to that of the other mODHs.
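As an illustration of how the parameters in this paragraph interact, the sketch below shows how an order-of-magnitude difference in K M translates into the catalytic efficiency (k cat /K M ) and into initial rates at low substrate concentration under the Michaelis–Menten rate law. The numbers are entirely hypothetical, not the measured constants from Table 1.

```python
# Minimal sketch (not the authors' analysis): two hypothetical enzymes share
# the same turnover number kcat but differ tenfold in KM, so their catalytic
# efficiencies kcat/KM differ tenfold as well.

def mm_rate(s, kcat, km, e_total):
    """Initial rate v = kcat * [E]total * [S] / (KM + [S])."""
    return kcat * e_total * s / (km + s)

kcat = 10.0              # 1/s, identical for both hypothetical enzymes
km_a, km_b = 0.1, 1.0    # mM, an order of magnitude apart

eff_a = kcat / km_a      # catalytic efficiency, 100 /(mM*s)
eff_b = kcat / km_b      # catalytic efficiency,  10 /(mM*s)

# At [S] << KM the rate ratio approaches the efficiency ratio:
s = 0.01                 # mM
v_a = mm_rate(s, kcat, km_a, 1e-3)
v_b = mm_rate(s, kcat, km_b, 1e-3)

print(eff_a / eff_b)         # 10.0
print(round(v_a / v_b, 2))   # 9.18
```

At saturating substrate concentrations, by contrast, both hypothetical enzymes would approach the same maximal rate, which is why similar k cat values with divergent K M values show up mainly at sub-saturating substrate levels.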
Heat stability of mODHs
The mODH sequences were identified from metagenomic data collected from microbial communities living under extreme conditions, such as in hot springs. It was therefore of interest to determine whether their protein stability reflects adaptation to these extreme conditions. We applied differential scanning fluorimetry (Niesen et al. 2007 ; Gao et al. 2020 ) to determine the melting points of these mODH proteins. As shown in Table 2 , all mODHs that we purified showed a drastic increase (15–30 °C) in thermal stability compared to the model Ar ODH (see also Figure S2 for raw melting curve data). This result is encouraging for enzyme stability under chemical process conditions.
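The melting-point read-out of differential scanning fluorimetry can be sketched as follows: T m is taken as the inflection point of the sigmoidal fluorescence-vs-temperature transition, i.e. the temperature where dF/dT is maximal. The curve below is synthetic (a Boltzmann sigmoid with an assumed T m of 62 °C), not instrument data from this study.

```python
# Sketch: extract Tm from a (synthetic) thermal-unfolding fluorescence curve
# as the temperature of maximal slope. Real analysis would use instrument data.
import math

def fluorescence(t, tm=62.0, slope=2.0):
    # two-state unfolding transition modeled as a Boltzmann sigmoid
    return 1.0 / (1.0 + math.exp((tm - t) / slope))

temps = [30 + 0.1 * i for i in range(501)]   # 30-80 °C scan, 0.1 °C steps
f = [fluorescence(t) for t in temps]

# numerical first derivative (central differences); Tm = its maximum
dfdt = [(f[i + 1] - f[i - 1]) / (temps[i + 1] - temps[i - 1])
        for i in range(1, len(f) - 1)]
tm_est = temps[1 + dfdt.index(max(dfdt))]
print(round(tm_est, 1))  # 62.0
```

With this read-out, the 15–30 °C shifts reported in Table 2 correspond to the whole transition moving to higher temperatures for the mODHs relative to Ar ODH.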
In addition, we have performed heat stability experiments by incubating the enzymes at varying temperatures before measuring their specific activities. From the residual activities, it is clear that while Ar ODH loses activity at 50 °C, the metagenomic enzymes show moderate or no loss of activity even at 60 °C (Fig. 3 ). This result indicates that the metagenomic enzymes derived from a hot spring not only have higher melting temperatures (see above) but can also retain their activity after exposure to higher temperatures.
Molecular determinants of substrate preference
In order to better understand the factors determining the different amino acid substrate preferences of ODHs, we performed a detailed analysis of their sequences and structures. For the structural analysis, models predicted by AlphaFold were used. The overall structure of mODHs closely resembles that of other opine dehydrogenases despite very low sequence identity (Fig. 4 ). The structural family of enzymes sharing this overall fold is named octopine dehydrogenases (Pfam 02317) after the first and most well-characterized enzyme from Pecten maximus ( Pm OcDH) (Smits et al. 2008 ) and the octopine synthases (OCS) from naturally transgenic plants (Hack and Kemp 1980 ; Matveeva and Otten 2021 ). Ar ODH is also part of this structural class, as outlined by several publications (Britton et al. 1998 ; Smits et al. 2008 ; Sharma et al. 2017 ). A general feature of this class is the two-domain structure, with an NAD(P)H-binding domain at the N-terminus and a substrate binding domain at the C-terminus. The two domains together form a cleft, wherein catalysis can take place upon closure of the cleft by domain motions (Fig. 4 A). In the first domain, a key structural motif is the Rossmann-fold helix (GxGxxG/A) responsible for cofactor binding, widely shared among NAD(P) + -dependent dehydrogenases. The substrate binding domain contains a few residues that are conserved among ODHs from any organism, such as a catalytic histidine-aspartate dyad or a key tryptophan residue in the active site (Fig. 4 B). Based on these similarities, it can be assumed that findings describing the general mode of action of ODHs (Smits et al. 2010 ; Sharma et al. 2017 ; McFarlane et al. 2018 ) are applicable to the new mODHs as well.
However, the factors determining substrate specificity were still unclear, so we set out to investigate them by bioinformatic as well as experimental means. Comparing active site residues corresponding to those mutated by Codexis (see Table S5 ), mostly only subtle differences appear between Ar ODH and mODHs. A notable exception is position 111 in Ar ODH, where an alanine is replaced by an arginine in all mODHs. We hypothesized that this change could be responsible for the substrate preference of mODHs towards amino acids with negatively charged side chains. Therefore, we performed mutagenesis on mODH-582 to generate the variant R110A. We also created a “control mutant,” Ar ODH A111R. Testing the activity of these variants on l -aspartate and l -phenylalanine and comparing them to the wild-type enzymes revealed loss of activity with the original substrates, but no gain of activity towards the other (Table S7 ). This result prompted us to further inspect the active site of ODHs, and we realized that position 198 might be spatially correlated with position 111. Ar ODH has an asparagine opposite to A111, while all mODHs possess a glycine in that position, leaving more space for the longer arginine side chain. Therefore, we prepared the double mutant variants mODH-582 R110A G198N and Ar ODH A111R N198G. We tested these variants with l -aspartate and l -phenylalanine and observed a complete switch of substrate preference (Fig. 4 C). We carried out the kinetic characterization of these variants and compared the results to the wild-type enzymes (Table 3 ). Here, we observed no apparent reaction with the non-preferred substrates; thus, the change in substrate preference could be clearly demonstrated.
In addition to the site-directed mutagenesis studies, we decided to also take a holistic approach towards rationalizing the substrate preference of ODHs. We performed APBS (Adaptive Poisson-Boltzmann Solver) calculations to assess the electrostatic surface potential of mODHs. In accordance with our experimental results showing a preference of mODHs for negatively charged substrates, all mODHs show a remarkable accumulation of positive surface charge inside their active sites, in strong contrast to the close to neutral surface potential of the Ar ODH active site (Fig. 5 B and C, Figure S3 ). To follow up on this result, we wanted to identify specific residues that could be responsible for this charge accumulation inside the cleft between the two domains. To this end, we identified the residues lining the walls of this cleft using the Caver Web tool of Loschmidt Laboratories (Stourac et al. 2019 ). Then, we reduced a multiple sequence alignment of Ar ODH and mODHs to contain only these residues (Fig. 5 A). Interestingly, we could not identify large differences in terms of charged residues. The only position that shows a neutral to positive change for all mODHs is the above-mentioned position 111 of Ar ODH. There are several other positions, however, that show positive charge changes (i.e., negative to neutral or neutral to positive), but not for all mODHs: 14 (N to H), 34 (D to N/S), 36 (D to T/S/F), 108 (N to H), 154 (A to R), 157 (G to R), 160 (D to P), 253 (P to R), 261 (E to M/Y/L/K), 284 (A to K), 287 (I to K), and 291 (T to H) ( Ar ODH numbering is used) (exemplary cases are shown in Fig. 5 D and E).
Preparative scale transformations and diastereoselectivity
After establishing the substrate preference of the newly discovered mODHs, we wanted to assess the diastereoselectivity of these enzymes and demonstrate their applicability in preparative scale biocatalysis. First, 100 mg scale transformations were conducted on l -histidine ( 2 ) with pyruvate ( a ) using mODH-582 and Ar ODH, as this amino acid was accepted by both enzymes to a similar extent. The reactions were run until complete amino acid depletion was observed, as monitored by our Fmoc-derivatization protocol. However, 2a has poor retention on reverse-phase HPLC and gives a low UV signal. Our attempts to use ion-exchange chromatography for product isolation failed, so derivatization was still necessary, but the opine-type secondary amines do not react with Fmoc-Cl under the conditions used for derivatization. Therefore, we scaled up and modified the protocol to yield better conversion to the Fmoc derivative of 2a (Fmoc- 2a ). This compound was isolated from both reactions with yields of 40% for Ar ODH and 20% for mODH-582. Meanwhile, we also prepared reference compounds (Fmoc-(1 R ,2 S )- 2a * and Fmoc-(1 S ,2 S )- 2a ) by chemical reductive amination of pyruvate ( a ) with l -histidine ( 2 ) (see “Materials and Methods”). The resulting diastereomers were isolated, and their NMR spectra were compared with those of the enzymatic products (Figure S4 ). Fmoc-(1 R ,2 S )- 2a * and Fmoc-(1 S ,2 S )- 2a give distinguishable signals, and the enzymatic products clearly correspond to Fmoc-(1 R ,2 S )- 2a *. In addition, comparison of the retention times of the Fmoc- 2a products of chemical and enzymatic origin also supports identical selectivity of Ar ODH and mODH-582 (Figure S5 ). Since the R-selectivity of Ar ODH is already well established (Asano et al. 1989 ), we concluded that mODH-582 also has R-stereoselectivity in the reductive amination.
The diastereoselectivity of the rest of the mODHs was found to be identical to that of mODH-582 and Ar ODH by performing analytical-scale reactions with l -histidine ( 2 ) and comparing the LC chromatograms after Fmoc-derivatization (Figure S6 ). We also carried out preparative scale transformations with mODH-582 using the best substrates of mODHs, l -aspartate ( 3 ) and l -cysteic acid ( 7 ), with pyruvate ( a ) as the ketoacid partner. The enzymatic reactions reached > 99% conversion, and the products were isolated as their Fmoc derivatives (24% for Fmoc-(1 R ,2 S )- 3a , 23% for Fmoc-(1 R ,2 R )- 7a , Scheme 2 ). Here, the configuration of the new stereocenter is assumed to be R based on the diastereoselectivity observed with l -histidine.
While scaling up the reactions with the preferred substrates, we revisited the question of cofactor preference, which had previously been determined only in preliminary experiments with l -alanine and later corroborated by kinetic measurements (see above), but not in biocatalytic reactions. To our surprise, with l -aspartate, some enzymes (mODH-48, mODH-49, mODH-55, and mODH-582) exhibited better conversions using NADH as cofactor (see Table S8 ). This result shows that better kinetic parameters do not necessarily translate into higher conversion values under biocatalytic reaction conditions. It also suggests that the cofactor preference of some mODHs might be substrate dependent.
Opine dehydrogenases are a diverse class of enzymes specialized to perform wide-ranging physiological roles in organisms across the tree of life. The substrate specificity of each enzyme has evolved to perform a certain physiological function in the given organism. Mining opine dehydrogenases from metagenomes of extreme environments allows for the identification of new enzymes with altered substrate specificities that could have evolved under significantly different evolutionary pressure. In fact, the metagenomic opine dehydrogenases (mODHs) discovered within the framework of this study appear to belong to an evolutionarily distant subclass of ODHs (Fig. 1 ) that was previously not described. Sequence identity to the closest homologues deposited in UniProtKB varies between 62 and 99%, all of which are unreviewed, while mODH-48 is identical to an obsolete entry in UniParc (Table S2 ). Their overall structure, as predicted by AlphaFold, closely resembles that of other opine dehydrogenases despite very low sequence identity. However, they exhibit unique properties that make them a valuable addition to this underexplored enzyme class. Being derived from a hot spring metagenome, they show increased thermotolerance, which is indicated by higher melting temperatures, resistance to heat inactivation, and an increased optimum temperature. Moreover, their substrate preference towards negatively charged amino acid substrates is so far unprecedented for ODHs. The molecular background of the different amino acid substrate specificities of ODHs had not been elucidated to date. Several residues have been proposed to form key interactions with the amino acid substrate in the active site (Smits et al. 2008 ; McFarlane et al. 2018 ), but there was no indication that the amino acid preference would be tuneable by manipulating any of these residues. In this study, we demonstrated the switch of the amino acid preference in two investigated ODHs by mutating two spatially correlated positions.
In addition to these two residues that directly influence substrate specificity, we have also considered global factors that can contribute to the preference of mODHs towards negatively charged amino acids. We have found that there is a significant accumulation of positive surface charge in the mODHs active site compared to the neutral surface of Ar ODH as revealed by APBS electrostatic potential calculations.
We have thoroughly analyzed the positions that could be responsible for this phenomenon and found that they are located mostly at the edges of the catalytic cleft rather than inside it, where catalysis takes place; negative charge changes can also be pinpointed among them. Overall, we hypothesize that the preference of mODHs towards negatively charged substrates is aided by this positively charged microenvironment inside the catalytic pocket. The fact that the substitutions generating this charged microenvironment are not identical for all mODHs suggests that they might have evolved independently from a common ancestor to perform the same or similar functions.
The newly discovered opine dehydrogenases have a clear preference for l -amino acids in line with previous observations (Telek et al. 2023 ). The diastereoselectivity (dictated by the stereoselectivity in the formation of the new stereocenter) also follows the general trend of ODHs, i.e., R-selectivity can be assumed in the reductive amination step based on experimental results with l -histidine.
The kinetic parameters of mODHs also align well with previous reports (Telek et al. 2023 ), even though their catalytic efficiency was found to be lower than that of Ar ODH (which is at the higher end of the reported values) (Tables 1 and 3 ). This might be an adverse effect of their increased stability, which can compromise the flexibility reflected in lower k cat values. The latter parameter could be targeted by enzyme engineering, which would allow the exploitation of their superior thermal stability and distinct substrate preference. However, it is worth mentioning that better catalytic efficiency observed in kinetic measurements does not necessarily translate into higher conversions in biocatalytic reactions. One example is mODH-55, which shows kinetic parameters superior to most other mODHs but, unlike the others, fails to reach > 95% conversion with l -aspartate (Fig. 2 and Table 1 ). These discrepancies between kinetic measurements and biocatalytic reactions might be attributed to several factors (e.g., long-term enzyme stability and product inhibition) that can influence an enzyme's performance in an overnight reaction in contrast to a quick kinetic measurement. These factors might also affect the cofactor preference of these enzymes, which is somewhat contradictory. In biocatalytic reactions with l -alanine, NADPH proved to be preferred, and this preliminary observation was also corroborated by kinetic data measured with the best substrate l -aspartate (Table 1 ). A notable exception was mODH-582, which showed a clear NADH preference, while mODH-48 appeared to have no preference. However, in biocatalytic reactions, four of six enzymes performed better with NADH (see Table S8 ). The cofactor preferences revealed by the kinetic data can be explained by examining the cofactor binding site of the enzymes. Position 35 ( Ar ODH numbering) has already been proposed to determine the cofactor preference of ODHs in pathogenic bacteria (Laffont et al. 2019 ), where an arginine in that position ensures preferential binding of NADPH. In Ar ODH and mODH-582, which show a clear NADH preference, aliphatic amino acids are found in this position (alanine and isoleucine, respectively), while all other mODHs possess an arginine there (Fig. 6 ). This suggests that an arginine in position 35 is a good indicator that NADPH is accepted as a cofactor even if a clear preference towards it is not always observed. However, since these predictions do not necessarily translate to behavior in biocatalytic reactions (especially with different substrates), we suggest that for any future development the cofactor preference should be established for each enzyme under the particular conditions and with the particular substrates to be used.
In conclusion, with the discovery of metagenomic opine dehydrogenases (mODHs), we were able to add a new subclass with so far unprecedented substrate specificity to the opine dehydrogenase family. The newly identified enzymes have a strong preference for polar amino acids, especially those with negatively charged side chains. This preference is governed by two spatially correlated positions that can be used to switch the substrate specificity of ODHs by site-directed mutagenesis. The preference towards negatively charged amino acids can be further rationalized by an overall positively charged surface area within the active site. Importantly, the six mODHs display higher melting temperatures and better resistance to heat inactivation compared to the so far best characterized enzyme ( Ar ODH), which is a promising property from an industrial perspective. Their enantio- and diastereoselectivity are proposed to be identical to those of known ODHs (preference for l -amino acids and R-selectivity on the newly formed C–N bond). Overall, these enzymes offer good starting points for biocatalytic applications aiming at the synthesis of highly functionalized peptidomimetic building blocks.
Enzymatic processes play an increasing role in synthetic organic chemistry, which requires access to a broad and diverse set of enzymes. Metagenome mining is a valuable and efficient way to discover novel enzymes with unique properties for biotechnological applications. Here, we report the discovery and biocatalytic characterization of six novel metagenomic opine dehydrogenases (mODHs) from a hot spring environment (EC 1.5.1.X). These enzymes catalyze the asymmetric reductive amination between an amino acid and a keto acid resulting in opines, which have defined biochemical roles and represent promising building blocks for pharmaceutical applications. The newly identified enzymes exhibit unique substrate specificity and higher thermostability compared to known examples. Their preferential utilization of negatively charged polar amino acids is so far unprecedented for opine dehydrogenases. We have identified two spatially correlated positions in their active sites that govern this substrate specificity and demonstrated a switch of substrate preference by site-directed mutagenesis. While they still suffer from a relatively narrow substrate scope, their enhanced thermostability and the orthogonality of their substrate preference make them a valuable addition to the toolbox of enzymes for reductive aminations. Importantly, enzymatic reductive aminations with highly polar amines are very rare in the literature. Thus, the preparative-scale enzymatic production, purification, and characterization of three highly functionalized chiral secondary amines lend a special significance to our work in filling this gap.
Key points
• Six new opine dehydrogenases have been discovered from a hot spring metagenome
• The newly identified enzymes display a unique substrate scope
• Substrate specificity is governed by two correlated active-site residues
Supplementary Information
The online version contains supplementary material available at 10.1007/s00253-023-12871-z.
Below is the link to the electronic supplementary material. | Acknowledgements
The authors thank the Analytical Department of SRIMC for the analytical development and the characterization of the compounds.
Author contribution
AT, ZM, GT, and BGV conceived and designed research, analyzed data, and wrote the paper. KT, BV, and VG contributed new methods. AT, GT, KT, BV, and VG conducted experiments.
Funding
Open access funding provided by Budapest University of Technology and Economics. Supported by the National Research, Development and Innovation Office of Hungary (K119493, K135231, VEKOP-2.3.2–16-2017–00013 to BGV, NKP-2018–1.2.1-NKP-2018–00005), and the TKP2021-EGA-02 grant, implemented with the support provided by the Ministry for Innovation and Technology of Hungary from the National Research, Development and Innovation Fund. The support of ÚNKP-22–4-II-BME-158 (ZM) has been acknowledged. The research has been supported by Project RRF-2.3.1–21-2022–000 15 (ZM), which has been implemented with the support provided by the European Union. Project no. C1580174 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the NVKDP-2021 funding scheme (AT).
KT, BV, and VG were partially supported by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the ELTE TKP 2021-NKTA-62 funding scheme. KT, BV, VG and BGV were partially funded by the National Research, Development and Innovation Fund under the contract 2022–1.2.2-TÉT-IPARI-UZ-2022–00003.
Data availability
Most data generated or analyzed during this study are included in this published article (and its supplementary information file).
Declarations
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Competing interests
The authors declare no competing interests.

License: CC BY. Citation: Appl Microbiol Biotechnol. 2024 Jan 13; 108(1):1-16.
Introduction
The brain is metabolically highly active and uses around 20% of the glucose and oxygen consumed by the body [ 1 , 2 ]. The oxygen consumed in the brain is needed mainly for ATP regeneration by mitochondrial oxidative phosphorylation, as this process provides the large amounts of metabolic energy required to fuel the information transfer between neurons [ 3 ]. Nevertheless, astrocytes, which are essential partners of neurons in the brain [ 4 – 6 ], also need substantial amounts of ATP for their many essential functions. Important ATP-consuming astrocytic processes are the maintenance of the cellular membrane potential [ 7 , 8 ], neurotransmitter uptake [ 9 ], glutamine synthetase-catalyzed amidation of glutamate to glutamine [ 10 ], ATP-driven export processes [ 11 ] as well as the synthesis of energy stores such as glycogen [ 12 ] and fatty acids [ 2 , 13 ]. The synthesis and continuous availability of high ATP levels are highly important to fuel all astrocytic contributions to normal brain function.
ATP is regenerated in astrocytes by phosphorylation of ADP, mainly by cytosolic glycolysis and by mitochondrial oxidative phosphorylation [ 14 , 15 ]. Both processes are used for ATP regeneration in astrocytes, and each appears to have sufficient capacity to, at least in part, compensate for an impairment of the other. This is demonstrated by the surprising ability of cultured astrocytes to maintain a high cellular ATP concentration after glucose deprivation or after treatment with a mitochondrial inhibitor in the presence of glucose [ 15 ] and to survive an impairment of the respiratory chain as glycolytic cells [ 16 , 17 ]. However, astrocytes are rapidly depleted of their cellular ATP if both glycolytic substrate-level phosphorylation and mitochondrial oxidative phosphorylation are impaired [ 15 , 18 – 22 ].
Cultured astrocytes contain a high cellular ATP concentration of around 7 mM [ 15 ]. In contrast, such cultures contain only low amounts of ADP and AMP [ 23 – 27 ]. This leads to high values for the calculated adenylate energy charge (AEC: ([ATP] + 0.5 [ADP])/([ATP] + [ADP] + [AMP])) [ 28 , 29 ] of around 0.9 in untreated cultured astrocytes [ 24 , 26 , 27 ].
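The adenylate energy charge cited above is a simple ratio, which a minimal worked example makes concrete. The ATP value below matches the ~7 mM reported for cultured astrocytes, while the ADP and AMP values are illustrative, chosen only to reproduce the ~0.9 figure reported for untreated cultures.

```python
# Worked example of the adenylate energy charge (AEC) formula quoted above:
# AEC = ([ATP] + 0.5*[ADP]) / ([ATP] + [ADP] + [AMP])

def energy_charge(atp, adp, amp):
    return (atp + 0.5 * adp) / (atp + adp + amp)

# ATP ~7 mM as reported; ADP and AMP values are hypothetical low levels
aec = energy_charge(atp=7.0, adp=1.0, amp=0.3)
print(round(aec, 2))  # 0.9
```

The formula is bounded between 0 (all AMP) and 1 (all ATP), so the ~0.9 value reflects that nearly all adenylate is present as ATP.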
Creatine phosphate (CrP) serves in many cell types as a rapidly available and quickly mobilizable buffer of high-energy phosphate groups whose exclusive function is to support rapid phosphorylation of ADP to ATP via creatine kinase (CrK) [ 30 – 33 ]. Cellular CrP is generated by phosphorylation of creatine by CrK, which is present in high specific activity in cultured astrocytes [ 34 ]. The cellular creatine that serves as substrate of CrK can be synthesized in astrocytes from the precursor amino acids glycine, arginine and methionine [ 35 , 36 ] or is derived from uptake of exogenous creatine [ 37 ]. Due to the high AEC [ 24 , 26 , 27 ] and the high activity of CrK [ 34 ], most of the cellular creatine should be present in astrocytes as CrP. Indeed, cultured astrocytes contain specific CrP levels that have been reported to be similar to or even higher than those reported for ATP [ 38 – 41 ]. To our knowledge, only few studies have reported conditions that modulate CrP levels in astrocytes. Under ischemic conditions, both ATP and CrP levels decline in cultured astrocytes with similar velocity, reaching 50% of the initial values after around 6 h of incubation [ 40 ]. Interestingly, exposure of astrocytes to ammonia or octanoate [ 39 ] or to glutamate [ 42 ] lowers the cellular CrP levels, while the high cellular ATP content remains unaltered. These results support the view that CrP serves also in astrocytes as a temporary energy buffer to prevent cellular ATP depletion.
In order to investigate the interplay between the different adenosine phosphates and CrP in astrocytes, we determined the basal contents of adenosine phosphates and CrP in cultured astrocytes and explored treatments which may affect those levels. Here we report that the specific CrP content and the CrP/ATP ratio, but not the ATP content, decline with the age of astrocyte cultures, while supplementation of the medium with creatine almost doubled the CrP content and the CrP/ATP ratio. In addition, impairment of glycolysis by 2-deoxyglucose and of oxidative phosphorylation by antimycin A caused a rapid decline in cellular CrP levels that preceded the decline in ATP contents. In contrast, in glucose-fed astrocytes antimycin A hardly affected the high cellular ATP content but severely lowered the CrP level during a 30 min incubation. These data demonstrate the importance of CrP in astrocytes as a rapidly mobilizable energy buffer that helps to maintain a high cellular ATP concentration, especially during episodes of impaired mitochondrial ATP production.
Materials
Sterile cell culture materials, unsterile 96-well plates and black microtiter plates were purchased from Sarstedt (Nümbrecht, Germany). Fetal calf serum (FCS), adenylate kinase, AMP, antimycin A, creatine, creatine kinase and 2-deoxyglucose (2DG) were obtained from Sigma-Aldrich (Steinheim, Germany; RRID:SCR_008988). ADP, Dulbecco’s modified Eagles medium (DMEM with 25 mM glucose, catalog number 52100-021) and penicillin G/streptomycin sulfate solution were from Thermo Fisher Scientific (Schwerte, Germany). The Cell Titer Glo ® 2.0 ATP Assay Kit was purchased from Promega (Walldorf, Germany; RRID:SCR_006724). Bovine serum albumin, dimethyl sulfoxide (DMSO) and perchloric acid were from AppliChem (Darmstadt, Germany; RRID:SRC_005814). ATP and pyruvate kinase were purchased from Roche Diagnostics (Mannheim, Germany; RRID:SCR_001326). All other basal chemicals were obtained from Sigma-Aldrich (Steinheim, Germany), Roth (Karlsruhe, Germany), Riedel-de Haën (Seelze, Germany) or Fluka (Buchs, Switzerland).
Astrocyte Primary Cultures
Wistar rats were obtained from Charles River Laboratories (Sulzfeld, Germany; RRID:SCR_003792). Animals were treated in accordance with the State of Bremen, German and European animal welfare acts. Primary astrocyte cultures were prepared from the brains of newborn rats as previously described in detail [ 43 ]. Of the harvested cell suspension, 300,000 viable cells were seeded per well of 24-well dishes in 1 mL culture medium (90% DMEM containing 25 mM glucose, 44.6 mM sodium bicarbonate, 1 mM pyruvate, 20 U/mL penicillin G, 20 μg/mL streptomycin sulfate, supplemented with 10% FCS). The cultures established from the seeded cells remained in the wells without sub-culturing in the humidified atmosphere of a Sanyo CO 2 incubator (Osaka, Japan) containing 10% CO 2 . The culture medium was renewed every seventh day and one day prior to experiments. For the current study, confluent primary astrocyte cultures of an age between 14 and 28 days after seeding were used. Astrocyte-rich primary cultures were frequently characterized by immunocytochemical staining for cells positive for the astrocyte marker protein glial fibrillary acidic protein. These cultures are strongly enriched in astrocytes and contain only low numbers of contaminating other types of glial cells [ 43 – 45 ].
Experimental Incubation of Astrocytes
To test for the consequences of a preincubation of astrocytes with serum or creatine, the culture medium of astrocyte primary cultures in wells of 24-well dishes was replaced by 1 mL DMEM (containing 25 mM glucose, 44.6 mM sodium bicarbonate, 1 mM pyruvate, 20 U/mL penicillin G, 20 μg/mL streptomycin sulfate) that had been supplemented with or without 10% FCS and/or 1 mM creatine. After 24 h incubation at 37 °C in the humidified atmosphere of an incubator with 10% CO 2 supply, the cells were washed twice with 1 mL ice-cold (4 °C) phosphate-buffered saline (PBS; 10 mM potassium phosphate buffer pH 7.4 containing 150 mM NaCl) and lysed for quantification of adenosine phosphates and CrP.
To test for the short-term consequences of glucose deprivation and/or the application of metabolic inhibitors, astrocyte primary cultures in wells of 24-well dishes were washed twice with 1 mL pre-warmed (37 °C) glucose-free incubation buffer (IB; 145 mM NaCl, 20 mM HEPES, 5.4 mM KCl, 1.8 mM CaCl 2 , 1 mM MgCl 2 , 0.8 mM Na 2 HPO 4 , pH adjusted with NaOH to 7.4 at 37 °C) and subsequently incubated for up to 30 min at 37 °C in the humidified atmosphere of a CO 2 -free incubator in 250 μL glucose-free IB that had been supplemented with the given substrates and/or inhibitors. After the given incubation periods, the incubation media were harvested to test for potential cell damage by measuring the activity of extracellular lactate dehydrogenase (LDH), while the cells were washed twice with 1 mL ice-cold (4 °C) PBS and lysed for quantification of adenosine phosphates and CrP.
Determination of Cellular Contents of Adenosine Phosphates and Creatine Phosphate
Perchlorate lysates of cultured astrocytes were used to quantify the cellular contents of adenosine phosphates and CrP. Briefly, the cultures were washed twice with 1 mL ice-cold (4 °C) PBS and lysed in 400 μL of ice-cold 0.5 M HClO 4 on ice for 5 min. The lysates were collected and 2 M KOH was added to neutralize the lysates. After centrifugation at 12,100×g for 5 min to precipitate KClO 4 , the supernatant was harvested for quantification of adenosine phosphates and CrP. ATP in the lysate was determined as recently described [ 15 , 46 ] using a luciferine-luciferase-based luminometric assay.
ADP, AMP and CrP in the lysates were converted by enzymatic reactions to ATP that was subsequently quantified by the luminometric ATP assay. To convert CrP to ATP in astrocyte lysates, a reported method [ 47 ] was adapted. Ten μL of neutralized lysate were diluted with 190 μL of 70 mM Tris/acetate buffer pH 7.75, and 50 μL of reaction mixture (3 μM ADP, 1 mM MgCl 2 in 70 mM Tris/acetate buffer pH 7.75, without (for ATP quantification only) or with 5 U creatine kinase per 50 μL (for quantification of CrP plus ATP)) were added. After 60 min of incubation at 37 °C, the phosphate group of CrP had been completely transferred to ADP to generate ATP.
To convert AMP and/or ADP into ATP in astrocyte lysates, a reported method [ 48 ] was adapted. A 10 μL sample of the neutralized 400 μL lysate was diluted with 40 μL 70 mM Tris/acetate buffer pH 7.75 and subsequently with an additional 50 μL of reaction mixtures that contained 3 mM phosphoenolpyruvate, 20 mM MgCl 2 , and 70 mM Tris/acetate buffer pH 7.75 without (for ATP quantification) or with 2 U/50 μL pyruvate kinase (for quantification of ADP plus ATP) or 2 U/50 μL pyruvate kinase plus 9 U/50 μL adenylate kinase (for quantification of AMP plus ADP plus ATP). After 45 min of incubation at room temperature, ADP and AMP had been completely phosphorylated to ATP. Application of additional ATP for AMP phosphorylation by adenylate kinase plus pyruvate kinase is not required, as even minute amounts of initial ATP in the lysates, or of ATP generated from the initial ADP by pyruvate kinase, are sufficient to convert AMP quantitatively to ATP under the assay conditions used (data not shown).
Finally, 50 μL of the neutralized lysate samples or ATP standards (ATP quantification) or 50 μL of the reactions used to convert the molecules of interest into ATP (see above) were diluted in wells of a black 96-well plate with 50 μL of the ATP detection reagent (CellTiter-Glo ® 2.0 ATP Assay Kit) to start the luciferase reaction. After 30 min of incubation, the luminescence signal was recorded by a Fluoroskan Ascent FL chemiluminescence plate reader (Thermo Fisher Scientific, Bremen, Germany). ATP values for the samples were calculated using the linear calibration curve generated from the values obtained for the ATP standards (0–1000 nM for the CrP and ATP measurements; 0–2000 nM for the ATP, ADP and AMP measurements). The amounts of CrP, ADP and AMP were calculated by subtracting the initial ATP content (for CrP and ADP quantification) or the ATP plus ADP content (for AMP quantification) from the total ATP content determined per well (cellular ATP plus ATP generated by enzymatic conversion of AMP, ADP or CrP to ATP). Specific contents were calculated by normalizing the respective values to the initial cellular protein content of the cultures.
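The calibration and subtraction scheme described above can be sketched as follows. All numbers in the example are illustrative, not measured values from the study, and a zero-intercept linear calibration is assumed:

```python
def calibrate_atp(luminescence, standards):
    """Convert a luminescence reading to an ATP concentration (nM) using a
    zero-intercept least-squares fit to (concentration, signal) standards."""
    slope = (sum(c * s for c, s in standards) /
             sum(c * c for c, s in standards))  # signal per nM ATP
    return luminescence / slope

def derive_contents(atp, atp_plus_crp, atp_plus_adp, atp_plus_adp_amp, mg_protein):
    """Subtraction scheme: each enzymatic reaction converts one further
    metabolite to ATP, so pairwise differences isolate CrP, ADP and AMP.
    Inputs are total ATP amounts per well (nmol); results are specific
    contents in nmol/mg protein."""
    crp = atp_plus_crp - atp
    adp = atp_plus_adp - atp
    amp = atp_plus_adp_amp - atp_plus_adp
    return {k: v / mg_protein for k, v in
            {"ATP": atp, "CrP": crp, "ADP": adp, "AMP": amp}.items()}
```

For example, with hypothetical per-well totals of 3.6, 6.2, 3.9 and 4.1 nmol and 0.1 mg protein, this yields specific contents of approximately 36, 26, 3 and 2 nmol/mg for ATP, CrP, ADP and AMP, respectively.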
Determination of Cell Viability and Protein Content
The extracellular activity of LDH in 10 μL media samples harvested after a given incubation was compared with the initial cellular LDH activity to test for potential toxicity as previously described [ 43 ]. The initial cellular protein content per well was determined by the Lowry method [ 49 ] using bovine serum albumin as standard protein.
Data Presentation and Statistical Analysis
The data shown in figures and tables represent means ± standard deviations (SDs) of values obtained from three or more experiments that were each performed in duplicate on independently prepared astrocyte cultures. For data sets derived from at least 5 independent experiments, normal distribution was tested for by the Kolmogorov–Smirnov test. For data from fewer than 5 independent experiments, statistical analysis was done under the assumption of normal distribution. Analysis for statistical significance between groups of data was performed by ANOVA followed by the Bonferroni post-hoc test, and the calculated level of significance compared to the indicated control condition is given by *p < 0.05, **p < 0.01 and ***p < 0.001. Differences between two groups of data were tested for statistical significance by t test, and the calculated level of significance compared to the control condition is given by # p < 0.05, ## p < 0.01 and ### p < 0.001. p > 0.05 was considered not significant.

Results
Culture Age Dependency of the Cellular Contents of Adenosine Phosphates and CrP in Cultured Astrocytes
To determine the basal contents of adenosine phosphates and CrP in cultured astrocytes and to investigate a potential dependency of these contents on the age of the culture, specific values that had been determined in a total of 22 experiments (performed on 16 independently prepared cultures) were analyzed (Table 1 ). The average specific contents of ATP, ADP and AMP were found to be 36.0 ± 6.4 nmol/mg, 2.9 ± 2.1 nmol/mg, and 1.7 ± 2.1 nmol/mg protein, respectively. The average sum of all three adenylates was 40.7 ± 6.6 nmol/mg and the average AEC amounted to 0.92 ± 0.04 (Table 1 ). The average specific CrP content of the cultures was found to be 25.9 ± 10.8 nmol/mg and the ratio of CrP to ATP in cultured astrocytes was calculated to be 0.74 ± 0.28 (Table 1 ).
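The AEC reported here follows Atkinson's standard definition, (ATP + 0.5·ADP)/(ATP + ADP + AMP); the formula itself is not restated in this section, so the sketch below applies that standard definition to the average specific contents given above. Note that Table 1 averages ratios computed per experiment, which can differ slightly from the ratio of the averages:

```python
def adenylate_energy_charge(atp, adp, amp):
    """Atkinson's adenylate energy charge: (ATP + 0.5*ADP) / (ATP + ADP + AMP).

    Inputs are specific contents in nmol/mg protein; the result is dimensionless.
    """
    return (atp + 0.5 * adp) / (atp + adp + amp)

# Average specific contents reported in Table 1 (nmol/mg protein)
aec = adenylate_energy_charge(36.0, 2.9, 1.7)
crp_atp_ratio = 25.9 / 36.0

print(round(aec, 2))            # 0.92, matching the reported average AEC
print(round(crp_atp_ratio, 2))  # 0.72; Table 1 reports 0.74 from per-experiment ratios
```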
With increasing culture age, the protein content per well (Fig. 1 a) increased slightly, while the specific contents of ATP (Fig. 1 b), ADP (Fig. 1 e) and AMP (Fig. 1 f) as well as the calculated AEC (Fig. 1 g) and the sum of adenosine phosphates (Fig. 1 h) remained almost constant. For individual experiments substantial differences to the average values (Table 1 ) were observed, but no obvious age-dependent alterations of the specific contents of the three adenosine phosphates were found (Fig. 1 ). In contrast, the specific CrP content (Fig. 1 c) as well as the ratio of CrP to ATP (Fig. 1 d) declined with increasing culture age, explaining the substantial SDs obtained for the average values determined for CrP content and for the CrP/ATP ratio (Table 1 ).
Consequences of Creatine Supplementation on the Contents of CrP and Adenosine Phosphates in Cultured Primary Astrocytes
To test whether creatine supplementation may affect the cellular levels of CrP and ATP, the specific contents of ATP and CrP in cultured astrocytes that had been incubated for 24 h in culture medium without or with 1 mM creatine were analyzed. These incubations did not lead to any obvious toxicity or to an alteration in the morphology of the cells in the confluent astrocyte cultures (data not shown). While the average cellular ATP content was not affected by a preincubation with creatine (Table 1 , Fig. 2 a, b), the average specific CrP level in creatine-exposed astrocytes as well as the CrP/ATP ratio were doubled by the creatine treatment (Table 1 ). An increase in the specific CrP content of creatine-fed astrocytes by around 25 nmol/mg compared to respective control cells (incubation without creatine) was observed (Table 1 ), irrespective of the culture age (Fig. 2 a, b), and caused a substantially increased CrP/ATP ratio in both young and older astrocyte cultures (Fig. 2 c, d).
To investigate the importance of FCS and creatine for the maintenance of astrocytic CrP and ATP levels, 14 d-old and 27 d-old cultures were incubated for 24 h in DMEM with or without 10% FCS and/or 1 mM creatine. Absence of FCS significantly lowered cellular ATP and CrP contents in young (Fig. 3 a, e) and older cultures (Fig. 3 b, f) and also lowered the CrP/ATP ratios to some extent (Fig. 3 c, d), demonstrating that the presence of serum is required to maintain the normal high contents of ATP and CrP in cultured astrocytes. Supplementation of DMEM with creatine prevented the decline in cellular CrP levels that was caused by serum-deprivation, but not the decline in cellular ATP contents (Fig. 3 a, b). Finally, the presence of creatine in serum-containing medium strongly increased the specific CrP contents (Fig. 3 a, b) as well as the CrP/ATP ratios (Fig. 3 c, d) in both young and older astrocyte cultures, compared to an incubation in serum-containing DMEM, but did not affect ATP contents (Fig. 3 a, b). None of the conditions applied caused any significant alteration in the cellular levels of ADP or AMP (Fig. 3 e, f) or in the AEC (Fig. 3 g, h), compared to the control condition (presence of 10% FCS).
Modulation of CrP and ATP Levels in Astrocytes by Glucose Deprivation, 2DG Application and/or Inhibition of the Respiratory Chain
ATP is regenerated in astrocytes mainly by glycolysis and mitochondrial respiration [ 15 ]. To test how the absence or the presence of glucose and/or the inactivation of glycolysis and/or mitochondrial respiration may affect cellular ATP and CrP contents, astrocyte cultures were incubated for up to 30 min with or without glucose or with the glycolysis inhibitor 2DG [ 50 , 51 ] in the absence or the presence of 10 μM of the respiratory chain inhibitor antimycin A [ 52 ]. None of the conditions used compromised the cell viability as demonstrated by the absence of any significant increase in extracellular LDH activity (Table 2 ).
During incubations of astrocytes with or without glucose in the absence of antimycin A, neither the specific ATP content (Fig. 4 a, b) nor the specific CrP content (Fig. 4 d, e) was significantly altered compared to the cellular contents at the onset of the incubation. In contrast, incubations in the presence of antimycin A strongly affected cellular ATP and CrP levels (Fig. 4 ). In glucose-deprived astrocytes, an incubation with antimycin A lowered cellular ATP levels within 10 min and 30 min to 33% and 4% of the initial content, respectively (Fig. 4 a). The decline in cellular CrP was even more rapid for this condition (Fig. 4 d) and preceded the decline in cellular ATP content (Fig. 4 g). Already after 5 min of incubation, CrP levels were lowered by around 90% (Fig. 4 d, g).
For glucose-fed astrocytes the presence of antimycin A lowered cellular ATP content only to a small extent (Fig. 4 b), while a substantial loss of around 80% of the initial CrP content was observed during a 30 min incubation of glucose-fed astrocytes with antimycin A (Fig. 4 e, h).
The presence of 2DG reduced the cellular content of ATP slowly by around 50% within 30 min (Fig. 4 c), while the additional application of antimycin A severely accelerated the decline in cellular ATP in 2DG-treated astrocytes (Fig. 4 c) compared to cells that had been incubated with antimycin A in the absence of 2DG (Fig. 4 a). After exposure to 2DG the cellular CrP content was lowered within 10 min to around 34% of the initial value and did not further decline during longer incubations (Fig. 4 f). For astrocytes that had been treated with 2DG plus antimycin A, already after 5 min of incubation CrP was hardly detectable (Fig. 4 f). The decline in cellular CrP content in 2DG-treated astrocytes preceded that of the cellular ATP content for incubations without and with antimycin A (Fig. 4 i).
Modulation of the Contents of Adenosine Phosphates in Astrocytes by Glucose Deprivation, 2DG Application and/or Inhibition of the Respiratory Chain
To test how the absence or the presence of glucose, the inactivation of glycolysis and/or the inhibition of mitochondrial respiration may affect cellular levels of ATP, ADP and AMP, cultured astrocytes were incubated for up to 30 min without or with glucose or 2DG in the absence or the presence of 10 μM antimycin A (Fig. 5 ). Incubations in the absence of antimycin A, irrespective of the absence or the presence of glucose, did not significantly alter the specific contents of ATP (Fig. 5 a, b), ADP (Fig. 5 d, e) or AMP (Fig. 5 g, h), nor the sum of adenosine phosphates (Fig. 5 j, k) or the AEC (Fig. 5 m, n). In contrast, for glucose-deprived astrocytes that had been exposed to antimycin A, a rapid loss in cellular ATP content (Figs. 4 a, 5 a), a small transient increase in the cellular ADP content (Fig. 5 d), a significant increase in the cellular AMP level (Fig. 5 g) and a strong decline in the sum of the three adenosine phosphates (Fig. 5 j) and in the AEC (Fig. 5 m) were observed.
For cells that had been exposed to 2DG (in the absence of antimycin A), a decline in the cellular level of ATP (Fig. 5 c) and in the sum of the three adenylates (Fig. 5 l) was observed, while the specific contents of ADP (Fig. 5 f) and AMP (Fig. 5 i) remained low and the AEC was not altered but maintained at a high level (Fig. 5 o). In contrast, a coincubation of astrocytes with 2DG plus antimycin A caused a rapid loss in cellular ATP content (Fig. 5 c), a rapid transient increase in the cellular ADP content (Fig. 5 f), a robust significant increase in the cellular AMP level (Fig. 5 i) as well as a strong decline in the sum of adenosine phosphates (Fig. 5 l) and in the AEC (Fig. 5 o).

Discussion
To investigate the interplay of ATP and CrP metabolism of astrocytes, we have used confluent astrocyte cultures of an age between 14 and 28 days. The average specific content of ATP in untreated astrocyte cultures was 36.0 ± 6.4 nmol/mg, within the range of 20–40 nmol/mg that has been reported previously for astrocyte cultures by several groups [ 15 , 19 , 21 , 27 , 46 , 50 ]. The specific cellular ADP and AMP contents of untreated cultured astrocytes were very low compared to the ATP content, consistent with literature data for these adenosine phosphates [ 23 – 27 ] and with the reported high AEC of around 0.9 in astrocytes [ 24 , 26 , 27 ]. The average specific CrP content of cultured astrocytes was, at 25.9 ± 10.8 nmol/mg, similar to that of ATP, as reported previously by other groups [ 38 – 41 ].
Comparison of cellular CrP and adenylate contents of astrocyte cultures of different ages revealed that the specific cellular CrP level declined with increasing culture age, while the specific contents of ATP, ADP and AMP as well as the AEC did not show any obvious age-dependent alterations. Some age-dependent declines in creatine [ 53 ] or CrP [ 53 , 54 ] contents have previously been reported for muscle and brain tissue [ 55 ]. For cultured astrocytes, impaired cellular creatine synthesis [ 35 , 36 ], increased creatinine formation [ 56 ] and/or impaired creatine uptake [ 37 ] may contribute to the observed age-dependent decline in the specific CrP content. However, as the application of creatine strongly increased the cellular CrP level to almost the same extent in young and older cultures, an age-dependent impairment of cellular creatine uptake is unlikely to contribute to the observed decline in cellular CrP content with culture age.
Serum-deprivation lowered both ATP and CrP levels in cultured astrocytes, suggesting that serum contains compounds which support the maintenance of high cellular levels of both compounds. A loss in cellular ATP and CrP of astrocytes under such conditions is not unexpected, as some cellular CrP is likely to disappear by spontaneous formation of creatinine [ 57 ] and as ATP can be exported from astrocytes [ 58 , 59 ]. Serum components may be used by astrocytes to compensate for these losses. Indeed, serum contains creatine in concentrations of up to 500 μM [ 60 , 61 ] which could serve as substrate for the active creatine uptake into astrocytes [ 37 ]. Similarly, ATP is present in serum [ 62 ] and products of the extracellular hydrolysis of this ATP by astrocytes [ 63 ] are likely to be taken up into the cells and serve as substrates for ATP synthesis via the purine salvage pathway [ 24 , 64 , 65 ].
Astrocytes are known to efficiently metabolize glucose and to produce ATP via glycolysis [ 66 ], but glucose deprivation did not affect the initial high cellular ATP content. This was expected for the short incubation periods used in our study (up to 30 min), as intracellular energy stores such as glycogen and fatty acids enable astrocytes to maintain a high ATP level in the absence of exogenous glucose for many hours by mitochondrial oxidative phosphorylation [ 15 ]. Also, the inhibition of mitochondrial respiration in the presence of glucose hardly affected cellular ATP levels, suggesting that upregulated glycolysis can, at least for a short incubation period of up to 30 min, compensate for an impairment in mitochondrial ATP regeneration. However, severe effects on the cellular ATP levels were observed if both glycolysis and mitochondrial ATP regeneration had been compromised in cultured astrocytes, consistent with literature data [ 15 , 18 – 22 ]. For such conditions, a transient increase in the cellular ADP content was determined after 5 min of incubation and a slower but stronger increase in the cellular AMP level during 30 min of incubation. The observed time-dependent changes in the cellular levels of adenosine phosphates are likely to be caused by the action of adenylate kinase, which has been reported to be present in cultured astrocytes [ 34 ]. This enzyme transfers the terminal phosphate group of one ADP molecule to a second ADP molecule [ 67 ], thereby regenerating ATP and producing AMP. However, the sum of all three adenosine phosphates was found to be dramatically lowered in glucose-deprived and antimycin A-treated astrocytes, suggesting that a strong cellular accumulation of AMP, which may strongly activate AMP-mediated signaling, is partially prevented by subsequent AMP metabolism, for example by deamination of AMP to inosine monophosphate or by dephosphorylation of AMP to adenosine [ 68 ].
Further studies are now required to elucidate whether starved astrocytes indeed release AMP-derived metabolites such as adenosine and/or inosine monophosphate.
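The mass-balance argument above can be made explicit with a toy sketch (illustrative numbers, not measured data): the adenylate kinase reaction 2 ADP ⇌ ATP + AMP redistributes phosphate groups but conserves the total adenylate pool, so the observed decline in the summed adenosine phosphates requires additional AMP-consuming reactions.

```python
def adenylate_kinase(atp, adp, amp, extent):
    """Advance the adenylate kinase reaction 2 ADP <-> ATP + AMP by
    'extent' (nmol/mg protein in the forward direction)."""
    return atp + extent, adp - 2 * extent, amp + extent

before = (4.0, 6.0, 1.0)                 # hypothetical ATP, ADP, AMP
after = adenylate_kinase(*before, 2.0)   # forward flux of 2 nmol/mg
# after == (6.0, 2.0, 3.0); the summed pool stays at 11.0 nmol/mg
```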
Astrocytic ATP content was slowly lowered within 30 min after application of 2DG to around 50% of the initial content, with the specific contents of ADP and AMP declining accordingly. This allowed the cells to maintain a high cellular AEC, despite a substantial loss in the sum of cellular adenosine phosphates. In contrast, inactivation of mitochondrial respiration in glucose-deprived and/or 2DG-treated astrocytes caused a rapid depletion of cellular ATP within minutes and the cells were unable to maintain a high AEC.
For all the conditions investigated in our study which caused a decline in cellular ATP contents, we also observed a rapid loss in cellular CrP content that always preceded that of ATP. This observation supports the view that cultured astrocytes use their high CrP content as a temporary buffer of high-energy phosphate groups that is immediately used to regenerate ATP from accumulating ADP during periods of insufficient ATP regeneration by glycolysis or mitochondrial metabolism. The high specific activity of CrK (around 3 U/mg) that has been reported for cultured astrocytes [ 34 ] will enable these cells to rapidly regenerate ATP by phosphorylating ADP at the expense of their cellular CrP content, consistent with the view that CrK-mediated ATP regeneration is at least one order of magnitude faster than ATP regeneration by glycolysis and oxidative phosphorylation [ 69 ].
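This buffering behaviour can be illustrated with a minimal toy model (all values hypothetical): as long as CrP remains, creatine kinase rephosphorylates ADP essentially instantaneously, so cellular ATP only begins to fall once the CrP pool is exhausted, reproducing the observation that the decline in CrP precedes that in ATP.

```python
def simulate_energy_buffer(atp0, crp0, demand, steps):
    """Toy model of CrP buffering: each step consumes 'demand' ATP and
    creatine kinase (CrP + ADP -> Cr + ATP) refills ATP from CrP while
    the buffer lasts. Units are arbitrary (e.g. nmol/mg per step)."""
    atp, crp, trace = atp0, crp0, []
    for _ in range(steps):
        atp -= demand                 # ATP consumption by cellular work
        refill = min(demand, crp)     # near-instantaneous CK-mediated refill
        atp += refill
        crp -= refill
        trace.append((atp, crp))
    return trace

trace = simulate_energy_buffer(36.0, 26.0, 10.0, 4)
# CrP drains first: (36, 16), (36, 6), (32, 0), then ATP falls: (22, 0)
```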
In the presence of glucose, an application of antimycin A for up to 30 min hardly affected the cellular levels of adenosine phosphates, while the cellular content of CrP was strongly lowered by such a treatment. This observation suggests that under conditions of limited ATP regeneration by impaired mitochondrial respiration and accelerated glycolysis [ 15 , 17 ], a high cytosolic ATP content and a high AEC are only maintained at the expense of the cellular CrP content. Similar results have been reported for an exposure of astrocytes to ammonia or octanoate [ 39 ] or to glutamate [ 42 ], which also resulted in lower cellular CrP levels while cellular ATP contents were maintained high.
In conclusion, a rapid decline of cellular CrP always preceded the decline in ATP content in cultured astrocytes under conditions that compromised cellular ATP regeneration by glycolysis and/or mitochondrial oxidative phosphorylation. These data demonstrate the importance of cellular CrP as a temporary and rapidly mobilizable energy buffer for maintaining a high cellular ATP content in astrocytes, especially during episodes of impaired mitochondrial ATP production. A supplementation with creatine doubled the cellular CrP content and the CrP/ATP ratio in astrocytes, which may improve the ability of these cells to temporarily deal with situations of compromised metabolic ATP regeneration. Further studies are now required to elucidate the beneficial consequences of a treatment of astrocytes with creatine and whether such processes may contribute to the wide range of reported health-related and therapeutic benefits of creatine supplementation [ 70 – 74 ].

Abstract

Adenosine triphosphate (ATP) is the main energy currency of all cells, while creatine phosphate (CrP) is considered a buffer of high-energy phosphate that facilitates rapid regeneration of ATP from adenosine diphosphate (ADP). Astrocyte-rich primary cultures contain ATP, ADP and adenosine monophosphate (AMP) in average specific contents of 36.0 ± 6.4 nmol/mg, 2.9 ± 2.1 nmol/mg and 1.7 ± 2.1 nmol/mg, respectively, which establish an adenylate energy charge of 0.92 ± 0.04. The average specific cellular CrP level was found to be 25.9 ± 10.8 nmol/mg and the CrP/ATP ratio was 0.74 ± 0.28. The specific cellular CrP content, but not the ATP content, declined with the age of the culture. Absence of fetal calf serum for 24 h caused a partial loss in the cellular contents of both CrP and ATP, while application of creatine for 24 h doubled the cellular CrP content and the CrP/ATP ratio, but did not affect ATP levels.
In glucose-deprived astrocytes, the high cellular ATP and CrP contents were rapidly depleted within minutes after application of the glycolysis inhibitor 2-deoxyglucose and the respiratory chain inhibitor antimycin A. For those conditions, the decline in CrP levels always preceded that of ATP contents. In contrast, incubation of glucose-fed astrocytes for up to 30 min with antimycin A had little effect on the high cellular ATP content, while the CrP level was significantly lowered. These data demonstrate the importance of cellular CrP for maintaining a high cellular ATP content in astrocytes during episodes of impaired ATP regeneration.
Acknowledgements
The authors would like to acknowledge the basal financial support of the University of Bremen for the project presented here.
Author Contributions
GK and RD designed the study concept. GK performed all experiments, analysed the data obtained and prepared the figures and the tables. JB contributed to the establishment of the assays for ADP and AMP. RD wrote the first draft of the manuscript. All authors reviewed and approved the final manuscript.
Funding
Open Access funding enabled and organized by Projekt DEAL. The authors have not disclosed any external funding.
Data Availability
Enquiries about data availability should be directed to the authors.
Declarations
Conflict of Interest
The authors have no conflict of interest to declare.

Neurochem Res. 2024 Oct 19; 49(2):402-414
PMC10787700

Introduction
Anthropogenic climatic warming is driving increases in the volume and extent of oxygen minimum zones (OMZs), as well as increases in sea surface temperatures (SST), the combination of which is likely to threaten the functioning of marine ecosystems (Diaz and Rosenberg 2008 ; Stramma et al. 2008b ; Keeling et al. 2010 ; Gruber 2011 ; Breitburg et al. 2018 ). Recovery of dissolved oxygen (DO) levels is predicted to take so long that the changes occurring at present are considered effectively irreversible on human timescales (Gruber 2011 ). Open-ocean OMZs have increased in extent over the last 60 years, with a quadrupling in the volume of anoxic water (Schmidtko et al. 2017 ; Breitburg et al. 2018 ). The northern and equatorial Pacific Ocean has seen the largest reductions in DO over the last 50 years, with approximately 40% of the global loss of oxygen occurring in this region (Schmidtko et al. 2017 ). Higher sea temperatures have increased stratification and reduced oxygen solubility, which, together with the upwelling of low DO water, has resulted in these increases (Levin 2018 ). Increased primary productivity in surface layers has raised the quantity of organic matter available for sub-surface microbial respiration, which further depletes mesopelagic DO concentrations (Diaz and Rosenberg 2008 ; Breitburg et al. 2018 ). Consequently, in the eastern equatorial regions of the Atlantic and Pacific Oceans, there are extensive OMZs within the depth range of 100–900 m that overlap both horizontally and vertically with marine predator hotspots (Karstensen et al. 2008 ; Czeschel et al. 2012 ; Queiroz et al. 2016 , 2019 ; Olivar et al. 2017 ; Vedor et al. 2021b ). These OMZs are likely to increase in volume further, with reductions in global dissolved oxygen of up to 7% predicted by 2100 (Schmidtko et al. 2017 ), with potentially serious consequences for marine life (Stramma et al. 2008b ).
Hypoxia has been shown not only to affect a wide range of taxa, but also to be consistently detrimental to almost all biological processes, such as survival, growth, development, and reproduction (Sampaio et al. 2021 ). While mitigation of these problems through the reduction of CO 2 emissions is therefore essential, it is also vital to gain an understanding of how expanding OMZs will impact marine ecosystems if measures are to be implemented to avoid or reduce these effects.
Two processes combine to reduce the volume of suitable habitat above OMZs. First, as the OMZ expands, it forces deoxygenated water into shallower layers of the water column, thus reducing the habitable volume from below. Simultaneously, rising sea surface temperatures reduce the volume of preferred habitat from above. This compression reduces the vertical extent of habitable water column for fishes and other organisms above the mesopelagic OMZ (Prince and Goodyear 2006 ; Prince et al. 2010 ; Stramma et al. 2010 , 2011 ; Vedor et al. 2021b ). Consequently, as the depth at which organisms experience hypoxia shoals, those with higher oxygen demands will be forced into a narrowing volume of cooler, better oxygenated water. Fishes may become habitat-compressed and therefore more vulnerable to fishing effort from surface longlines, for example (Vedor et al. 2021b ). Alternatively, fish might be displaced horizontally to areas outside the volume occupied by the OMZ if the hypoxic tolerance of prey species, such as euphausiids, myctophids, or squid (Trubenbach et al. 2013 ; Seibel et al. 2016 ; Olivar et al. 2017 ), is greater than that of the predators (e.g., tunas or sharks), which may affect foraging opportunities (Vetter et al. 2008 ; Breitburg et al. 2018 ).
Highly active, water-breathing marine predators, such as tunas, billfish, and sharks with high oxygen requirements, are likely to be most affected by changes in the distribution of low DO in the oceans (Brill 2007 ; Gilly et al. 2013 ). Consequently, bigeye tuna ( Thunnus obesus ; hereafter BET) and yellowfin tuna ( Thunnus albacares ; hereafter YFT) are appropriate species with which to test for the effect of low DO, as they have relatively high metabolic rates among fish, with consequently high O 2 demands, and are therefore likely to be more sensitive to low DO (Bernal et al. 2017 ). Istiophorid billfish, for example, are known to exhibit very different vertical movements with respect to DO, foraging in shallower water in the eastern tropical Pacific, which is characterised by low DO at depth, and foraging in deeper waters in the western tropical Atlantic, where DO is higher (Prince and Goodyear 2006 ). Furthermore, both tuna species are commercially important, together making up about 35% of the total global tuna catch (ISSF 2020 ). Therefore, understanding the importance of expanding OMZs for these species has implications not only for marine ecology but also for tuna fisheries management and food security (Baez et al. 2018 ).
The BET and YFT studied here exhibit vertical movements that result in different distributions of time-at-depth. Both species typically perform normal diel vertical migration (nDVM), moving to deeper water to forage during the day, with BET diving to around 200–300 m and YFT to around 50–150 m (Schaefer et al. 2009 ; Schaefer and Fuller 2010 ). Consequently, both species are likely to encounter low DO water which might act as a physiological or behavioural boundary to their vertical movements and foraging opportunities. There are considerable differences between BET and YFT, not only in the vertical habitat occupied, but in their physiological adaptations to both temperature and oxygen (Bernal et al. 2017 ). Given that BET spend the majority of the daytime in low DO waters when foraging at depth (Leung et al. 2019 ), well below the commonly accepted hypoxic threshold of 63 μmol/l (Breitburg et al. 2018 ), it is expected that BET would have adaptations specific to low DO and lower temperatures at depth (Bernal et al. 2017 ). Indeed, BET have better control over the thermoregulation of red muscle tissue by being able to selectively route blood through two vascular heat exchangers, varying the extent to which metabolic heat is retained in swimming muscles (Bernal et al. 2017 ). The heart muscle of BET has also been shown to have higher tolerance of low temperatures than that of YFT (Bernal et al. 2017 ). These adaptations provide greater tolerance to low temperatures, allowing BET to forage during the day well below the thermocline in water as cold as 7 °C. By contrast, YFT are generally restricted to warmer waters above the thermocline (Bernal et al. 2010 ). Furthermore, BET have been shown to have a higher blood–oxygen-binding affinity than YFT or skipjack tunas ( Katsuwonus pelamis ) (Lowe et al. 2000 ; Bernal et al. 2017 ). 
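Because hypoxia thresholds are variously reported in μmol/l, mg/l or ml/l across the literature, a small conversion helper (using the molar mass of O 2 , ≈32.0 g/mol) puts the threshold quoted above on a common scale:

```python
O2_MOLAR_MASS = 31.998  # g/mol

def umol_l_to_mg_l(c_umol_l):
    """Convert a dissolved-oxygen concentration from umol/l to mg/l."""
    return c_umol_l * O2_MOLAR_MASS / 1000.0

# The commonly accepted hypoxic threshold of 63 umol/l is ~2.0 mg/l;
# the ~75 umol/l level at which YFT show behavioural responses is ~2.4 mg/l.
print(round(umol_l_to_mg_l(63.0), 1))  # 2.0
print(round(umol_l_to_mg_l(75.0), 1))  # 2.4
```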
While studies using archival tag data confirm that YFT spend less time in hypoxic waters, they do not show the expected behavioural responses to low DO (such as increased swimming speed to improve ventilation) until DO reaches levels as low as 75 μmol/l, and YFT are able to survive hypoxic conditions for over 3 h (Bernal et al. 2017 ). These findings suggest that responses to low DO by BET and YFT are variable, and that thermal habitat and prey distributions, as well as hypoxia, determine the times spent above and below the thermocline. For example, it is known that BET and YFT favour different forage taxa, with BET consuming more fish and squid and YFT having a greater preference for crustaceans (Menard et al. 2006 ), and this differing prey preference could also contribute to the observed differences in water column occupancy.
Many studies have identified distinct vertical habitats for BET and YFT, and laboratory studies have revealed their tolerances for temperature and DO (for reviews, see Bernal et al. 2010; Leung et al. 2019). However, it is poorly understood to what extent actual DO levels in the open ocean affect the vertical movements and behaviour of BET or YFT populations, such that a representative overview of responses may be obtained. Although low DO is expected to affect the time spent at depth by each species, a comprehensive analysis of vertical movements in response to DO will help inform the effects that expanding OMZs will have on these tunas, and how this may in turn affect their foraging opportunities and vulnerability to surface fisheries.
In this study, we investigate the extent to which dissolved oxygen levels at a range of depths may constrain the vertical space use of bigeye and yellowfin tuna. We do so by identifying changes in the occupancy of the water column by tuna electronically tagged throughout the eastern Pacific Ocean (EPO). Clearly, experimental modification of DO profiles in the ocean is not possible, but we are able to analyse vertical movements and behavioural responses of the tuna, in terms of the extent of the water column used, in contrasting areas of high and low DO at depth. We hypothesise that YFT will respond more markedly to low DO than BET, and that occupancy of the water column by YFT, but not BET, will be shallower where DO at depth is lower. To test this, our approach was first to determine threshold depths at which vertical occupancy changed in relation to low DO at foraging depths. Having identified thresholds for each species, we then used these to select areas (1-degree grid cells) where vertical occupancy of the water column differed, and analysed DO and temperature at a range of depths to determine the likely drivers for the differing occupancy.
Additionally, BET are known to perform periodic vertical ascents to shallower waters while foraging at depth. These ascents (upward vertical excursions) are thought to allow rewarming after time spent in deeper, colder water (Schaefer et al. 2009; Schaefer and Fuller 2010). However, we also hypothesise here that these ascents into water with higher DO concentrations may enable faster physiological recovery from time spent in low DO waters. Therefore, we analysed vertical excursions in relation to DO at foraging depths, to test the hypothesis that the number of vertical ascents will be greater where DO at depth is lower. Finally, both species have been observed in numerous studies to perform occasional very deep dives (e.g., Schaefer et al. 2007; Schaefer and Fuller 2010; Fuller et al. 2015). Typically, these take the form of bounce dives (in which little time is spent at depth) and reach depths of over 1800 m for BET and 1600 m for YFT. The purpose of these dives in tuna and in many other species, for example blue sharks Prionace glauca and whale sharks Rhincodon typus (e.g., Brunnschweiler et al. 2009; Queiroz et al. 2012), is yet to be determined. We therefore analysed the occurrence of these dives in relation to DO at depth, to investigate whether DO concentrations at depth drive or inhibit these events.
Summary
This study used high-resolution depth time-series data from 92 BET and 175 YFT tagged in the eastern Pacific Ocean (EPO) in 2000 and 2003–2005 (BET) and in 2002–2011 (YFT). Light-based geolocation estimates were modelled with the unscented Kalman filter (uKFSST), which incorporates remotely sensed SST fields to derive most probable daily locations (Lam et al. 2010). Dissolved oxygen (DO) throughout the water column within 1-degree grid cells was determined from modelled DO datasets (Copernicus Marine Services, CMEMS, https://www.copernicus.eu ). From time-at-depth (TAD) profiles, time activity profiles, and prior research (Schaefer and Fuller 2002, 2010; Musyl et al. 2003; Schaefer et al. 2007; Matsumoto et al. 2013), it was clear that both species are active at depth during daylight hours and that it is, therefore, over these times that the interaction with DO was probably occurring. Consequently, nighttime activity was excluded from the analysis, and from the daytime time-at-depth profiles, the depths at which most time was spent were determined as 300 m for BET and 100 m for YFT.
Depth time-series data and light level geolocation positions were merged for 92 BET and 175 YFT tracks, to produce a 3D track for each individual, with the resulting data being loaded into an SQL Server database for subsequent analysis. To first identify the overall response (in terms of the TAD profiles) of the fish to different levels of DO at depth, a 1-degree grid was defined over the study area, and in each occupied grid cell, the mean DO at the putative foraging depths of 300 m for BET and 100 m for YFT was determined. Using these DO levels, the grid cells were separated into the upper and lower 10th percentiles, to represent two extremes of DO concentration potentially encountered by tagged tunas. By comparing time-at-depth plots produced from within these two sets of grid cells, differences in vertical habitat use between high and low DO areas were determined. In both species, depth thresholds were identified where occupancy of specific depths differed in low DO areas and, thus, a behavioural response to low DO was detected. Because the thresholds at which a behavioural response was identified differed markedly from the depth at which DO concentrations were used to separate grid cells, we then used the response threshold depths to further investigate the response to DO and temperature at a range of depths. To do so, we identified areas (sets of 1-degree grid cells) where the tuna spent more time above or below these response thresholds, separating them again into the upper and lower 10th percentiles. We then analysed depth time-series locations within these areas to compute correlation coefficients between the time spent below the response threshold depths and DO concentrations and temperature. By doing so, we were able to relate occupancy of the water column to DO and temperature at a range of depths. These relationships were further investigated using Generalised Additive Models (GAMs).
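The grid-cell binning and percentile selection described above can be sketched as follows. This is a minimal illustration rather than the authors' code: the function names and the toy DO values are ours, and the real analysis was performed against an SQL Server database.

```python
import math

def grid_cell(lat, lon):
    """Index a location into a 1-degree grid cell (floor of lat/lon)."""
    return (math.floor(lat), math.floor(lon))

def select_extreme_cells(cell_do, frac=0.10):
    """Split occupied grid cells into the lowest and highest `frac`
    of cells ranked by mean DO at the putative foraging depth."""
    ranked = sorted(cell_do, key=cell_do.get)   # ascending mean DO
    n = max(1, round(len(ranked) * frac))
    return ranked[:n], ranked[-n:]

# Toy example: mean DO (umol/l) at 300 m for ten occupied cells.
cells = {(0, -95): 40, (1, -95): 55, (2, -96): 60, (0, -96): 70,
         (1, -97): 90, (3, -95): 110, (2, -97): 150, (4, -96): 180,
         (3, -97): 200, (5, -95): 210}
low, high = select_extreme_cells(cells, frac=0.10)
```

Time-at-depth profiles would then be built separately from the locations falling inside the `low` and `high` sets of cells.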
For BET, we also analysed the number of vertical excursions from depth to surface waters the tuna performed in relation to DO and temperature, to test the hypothesis that DO, as well as temperature, was a driver for these characteristic movements. Finally, we investigated the occurrence of exceptional deep dives in reference to high and low DO grid cells.
Tagging
Full details regarding the materials and methods used in the capture, tagging, and release of the fish are given by Schaefer and Fuller (2002, 2007, 2009, 2010). BET were captured, tagged, and released while associated with both drifting fish aggregating devices (dFADs) and moored oceanographic buoys in the equatorial EPO, between 2°12′S and 2°00′N and between 94°42′ and 95°29′W, during 15–22 April 2000 and March–May of 2003, 2004, and 2005. Tagging was conducted on the chartered FV Her Grace, a 17.7-m, 99 gross-t, United States west-coast-style live-bait pole-and-line vessel. YFT were captured, tagged, and released along the coast of Baja California, Mexico, between 23°18′N and 31°45′N and between 110°20′ and 118°24′W, during November 2002–November 2008, and in areas surrounding the Revillagigedo Islands, Mexico, between 18°19′N and 19°20′N and between 110°54′ and 114°45′W, during February 2006–May 2011. Tagging was conducted aboard the US-flagged passenger-carrying fishing vessels FV Royal Star and FV Shogun, both home-ported in San Diego, California. The archival tags used were models Mk7 and MK9, manufactured by Wildlife Computers (Redmond, WA) (Wildlife Computers 2002), and LTD_2310 and LTD_2350, manufactured by Lotek Wireless, Inc., St. John's, Newfoundland, Canada (Schaefer and Fuller 2016). The total weight of the tags in air is about 32–40 g. Tags were designed for implantation into the peritoneal cavity of the fish, so that the sensor stalk protrudes outside the fish through an incision in the abdominal wall. A label, printed in Spanish, with information about reporting the recovery of the tag and the associated reward (US$250), was encased in the epoxy of the main body of the instrument.
Preparation of data
For BET, 92 geo-located tracks with daily position locations were available for analysis; the light level geolocation and modelling process is described fully in Schaefer and Fuller ( 2009 ). For YFT, 175 geo-located tracks were available giving a total of 267 tracks in this study. Details of these tracks are given in the Supplementary information, Tables S1 and S2.
Dive time-series were recorded with sampling intervals ranging from 4 s to 4 min, so to standardise the data, all tracks were interpolated to 4-min intervals. This standardisation was verified in a sensitivity test (Supplementary information, Figure S1). The daily position locations and the dive time-series data were then merged by interpolating the daily position locations linearly at 4-min intervals, to match the times in the depth time-series. The time of each location (originally recorded in UTC) was then converted to local time using the estimated longitude, with 15 degrees of longitude corresponding to a difference of 1 h. To remove from the analysis those days when the fish were known to be associating with floating fish aggregating devices (FADs), and also to remove any post-tagging behavioural anomalies, we removed the first 14 days of data from every track. Prior analysis of the BET tracking data revealed days throughout the tracks when the tuna exhibited what is referred to as 'surface-oriented' behaviour, and it is possible that on those days the fish were associated with FADs. However, as these behaviours occurred when the fish were beyond observation, FAD association could not be objectively confirmed as the sole cause of the occupancy of shallower waters, and it was therefore concluded that these days should not be removed from the analysis. To confirm that this choice did not unduly affect the work, a sensitivity analysis was performed (see Supplementary Analysis and Figures S2, S3, S4 and S5), which showed no significant differences. Consequently, the more conservative route of retaining these days was taken.
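The longitude-based conversion from UTC to local time (15 degrees of longitude per hour) can be expressed directly. A minimal sketch, with the example timestamp and longitude chosen purely for illustration:

```python
from datetime import datetime, timedelta

def utc_to_local(utc_time, longitude):
    """Approximate local solar time from longitude: 15 degrees of
    longitude correspond to a difference of 1 hour (west is negative)."""
    return utc_time + timedelta(hours=longitude / 15.0)

# 18:00 UTC at ~95 degrees W: -95/15 h = 6 h 20 min earlier.
t = datetime(2004, 3, 1, 18, 0)
local = utc_to_local(t, -95.0)
```

This approximation gives local solar time rather than civil (time-zone) time, which is appropriate here because the 06:00–17:00 daytime window is defined relative to daylight.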
Following the merge of the position and depth data, additional environmental data were collated, so that each datum comprised track name (ID), date, latitude, longitude, depth, temperature, DO, bathymetric depth, and mixed layer depth. All data were written to an SQL Server database to allow selection of data for later analysis. Temperature data were recorded by the tag along with depth; DO and mixed layer depth were obtained from 3D statistical models from Copernicus Marine Services (CMEMS, https://marine.copernicus.eu/ ); bathymetry was obtained from the GEBCO 30 arc-second product (GEBCO_2014 Grid, version 20150318, www.gebco.net ). All the analyses performed in this study began by selecting the required sub-set of data from the database (e.g., all locations from selected 1-degree grid cells) into CSV files that were imported into Excel (Microsoft Corporation) or SigmaPlot (Systat Software, San Jose, CA) for further analysis.
Determining changes in vertical habitat use in response to low DO
For this preliminary analysis, daytime time-at-depth (TAD) plots (i.e., between the local times of 06:00 and 17:00), together with example dive time-series plots showing depth, temperature, and DO, were used as a guide to the depth at which DO was likely to be important for each species. It was hypothesised that the lower bound of the daytime depth range, where DO is lower, is more likely to represent a boundary at which the DO concentration could influence behaviour; consequently, 300 m was selected as a depth towards the lower edge of the vertical activity range of BET, while, for YFT, a value of 100 m was selected. DO concentrations at these depths (from the mean 2005 dataset) were then determined for all occupied 1-degree grid cells, and using these values, grid cells in the 10th and 90th percentiles were selected to represent the two extremes of modelled DO 'encountered'. For BET, there were 336 occupied grid cells, from which we selected 33 locations for the upper and lower percentiles; for YFT, there were 443, giving 44 locations for each. The geographic ranges of these cells were used to select locations from the depth time-series data and to generate time-at-depth plots corresponding to high and low DO locations. From these TAD plots, we identified depth thresholds for BET and YFT at which there were significant differences in occupancy of shallower waters between the low and high DO locations.
Determining the response to DO and temperature from behavioural depth thresholds
The behavioural depth thresholds (see previous section) were then used to identify areas where DO at a range of depths (i.e., at depths other than the specified depth used previously) might be affecting behaviour. We therefore analysed all occupied 1-degree grid cells in the study area by comparing DO concentrations and temperatures at a range of depths between locations where most time was spent above or below the behavioural depth thresholds for each species. To do so, all daytime depth records (i.e., between 06:00 and 17:00 local time) were extracted for each occupied grid cell, and the times above and below the previously identified threshold depths were calculated. Using these proportions of time, the grid cells in the upper and lower 10th percentiles were selected. From these grid cells, differences between the upper and lower 10th percentiles were computed for DO and temperature at depths of 50, 100, 150, 200, 250, and 300 m, together with the median and maximum daily dive depths and the bathymetric depths. To test the hypothesis that DO at depth was a driver for the observed change in the distribution of time-at-depth, correlation coefficients (Pearson's r) were computed between the proportion of time spent below the threshold and the DO concentrations, temperatures, mixed layer depth, median depth, and bathymetric depth for all grid cells. As the response to DO might not be linear, we also developed Generalised Additive Models (GAMs), described in more detail below, to investigate the relationships between time spent below the depth threshold and DO and temperature at a range of depths.
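The correlation step can be sketched with a plain Pearson product-moment implementation. The per-cell values below are toy numbers for illustration, not data from the study:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Toy data: per-cell proportion of daytime spent below the threshold
# depth, and mean DO (umol/l) at one candidate depth in each cell.
prop_below = [0.1, 0.3, 0.5, 0.7, 0.9]
do_at_150m = [60, 90, 140, 170, 210]
r = pearson_r(prop_below, do_at_150m)
```

In the study, this computation would be repeated for DO and temperature at each candidate depth (50–300 m), as well as for mixed layer depth, median depth, and bathymetric depth.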
Analysis of BET vertical excursions
Vertical excursions by BET from depth to warmer surface waters are well known and have been observed in many studies. They have been hypothesised to allow the animal to rewarm after spending, usually, around an hour in colder deep water (Schaefer and Fuller 2002; Musyl et al. 2003). To investigate the extent to which low DO at depth might be a driver for vertical excursions, the number of upward vertical excursions performed between the local times of 06:00 and 17:00 in the 1-degree grid cells with the lowest and highest 10th percentiles of DO was compared. The analysis was extended to compare the number of vertical excursions between the 10th–90th, 20th–80th, 30th–70th, and 40th–60th percentile pairs, to identify the level at which DO might be a driver for vertical excursions.
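The text does not state the exact rule used to detect an upward vertical excursion in the depth time-series. A plausible two-threshold (hysteresis) sketch, with the 150 m and 50 m thresholds chosen purely for illustration:

```python
def count_vertical_excursions(depths, deep=150.0, shallow=50.0):
    """Count upward excursions in a depth time-series (m): each rise
    from below `deep` to above `shallow` counts once. The two-threshold
    (hysteresis) rule avoids double-counting small oscillations
    around a single boundary depth."""
    count, at_depth = 0, False
    for d in depths:
        if d >= deep:
            at_depth = True
        elif d <= shallow and at_depth:
            count += 1
            at_depth = False
    return count

# Toy 4-min series (m): two excursions from depth to surface waters.
series = [200, 210, 40, 30, 180, 220, 20, 190]
```

With real data, this count would be accumulated per individual and per grid cell over the 06:00–17:00 daytime window before comparing DO percentile groups.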
Analysis of the occurrence of exceptional deep dives
To investigate whether the frequency and depth of exceptional deep dives was influenced by low DO, we compared deep dives that occurred in the 10th and 90th percentile grid cells for DO at 300 or 100 m for BET and YFT, respectively. We selected dives deeper than 500 m for BET and deeper than 250 m for YFT.
GAM analysis
A range of Generalised Additive Models (GAMs) were developed, using the R mgcv package (Wood 2011), to investigate non-linear relationships in more detail. Response variables (e.g., time-at-depth, median depth, and maximum depth) were computed from all locations within each occupied 1-degree grid square. Because the chosen response variables were not necessarily normally distributed, the first step was to determine the appropriate distribution for each model (e.g., Gaussian, log-Gaussian, or Gamma). Second, the environmental variables being considered exhibit strong collinearity (or, more properly for GAMs, concurvity; Gu et al. 2010), with, for example, DO at 300 m being strongly related to DO at 250 m. To reduce this problem in the models, a preliminary analysis was performed to determine the most significant depth for DO and temperature for each response variable, with subsequent compound models using these single values (e.g., temperature at 50 m and DO at 200 m). While SST likely exhibits concurvity with shallow temperatures and DO concentrations, SST has been used as an indicator of water column conditions (Vedor et al. 2021a) and was therefore included. For all response variables, a number of uni- and multi-variate models were developed.

To complement the large-scale spatial analysis described above, GAMs were also used to analyse responses to DO and temperature from individual time-series data. Daytime (local time 06:00 to 17:00) locations were selected from the time-series data to provide date, depth, latitude, and longitude, which were used to compute, for each day and individual, the maximum daily dive depth, the average dive depth, and, for BET, a count of the number of vertical excursions performed. Because the occasional very deep dives performed by both species were considered to be outside of normal vertical occupancy, dive locations with depths > 500 m were excluded from the analysis. This could not be done for the spatial analysis, as nearly all grid cells had maximum depths > 500 m. For each individual day, the location was used to derive DO and temperature at depths of 50, 100, 150, 200, 250, and 300 m, as well as sea surface temperature (SST), to provide the environmental variables for the modelling. To allow for the error fields inherent in the light-level geolocated positions, environmental variables were computed as mean values over a 1-degree area centred on the location. The unique track identification number was included in preliminary models to assess the effect of individual variation, as was body (fork) length. Thus, the GAM analysis was performed using both spatial (large-scale) and individual (detailed) data.
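Averaging an environmental field over a 1-degree area centred on each light-based position (to allow for geolocation error) might be sketched as follows. The dictionary-based field, its 0.25-degree resolution, and the function name are illustrative assumptions, not the study's implementation:

```python
def mean_in_box(field, lat, lon, half_width=0.5):
    """Mean of a gridded field over a box centred on (lat, lon).
    `field` maps (grid_lat, grid_lon) -> value; the 1-degree box
    (half_width = 0.5) absorbs light-level geolocation error."""
    vals = [v for (gla, glo), v in field.items()
            if abs(gla - lat) <= half_width and abs(glo - lon) <= half_width]
    return sum(vals) / len(vals)

# Toy 0.25-degree DO field (umol/l) around a position estimate;
# the far-away cell at (2.0, -95.0) falls outside the box.
field = {(0.0, -95.0): 100.0, (0.25, -95.0): 110.0,
         (0.0, -95.25): 90.0, (2.0, -95.0): 500.0}
do_mean = mean_in_box(field, 0.1, -95.1)
```

The same windowed mean would be applied to temperature and to DO at each modelled depth before fitting the GAMs.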
To complement the large-scale spatial analysis described above, GAMs were used to analyse responses to DO and temperature from individual time-series data. Daytime (local time 06:00 to 17:00) locations were selected from the time-series data to provide date, depth, latitude, and longitude which were used to provide, for each day and individual, the maximum daily dive depth, average dive depth and, for BET, a count of the number of vertical excursions performed. Because the occasional very deep dives performed by both species were considered to be outside of normal vertical occupancy, dive locations with depths > 500 m were excluded from the analysis. This could not be done for the spatial analysis, as nearly all grid cells had maximum depths > 500 m. For each individual day, the location was used to derive DO and temperature at depths of 50, 100, 150, 200, 250, and 300 m as well as sea surface temperature (SST), to provide the environmental variables for the modelling. To allow for the error fields inherent in the light level geo-located positions, environmental variables were computed as a mean value from a 1-degree area centred on the location. The unique track identification number was included in preliminary models to assess the effect of individual variation, as was body (fork) length. Thus, the GAM analysis was performed using both spatial (large scale) and individual (detailed) data. | Results
Geographic distribution of tagging data
The most probable tracks by individuals of the two species occupied two distinct areas of the EPO. The tracks of both species extended over large areas and a broad range of depths, although it was evident that some YFT exhibited a more coastal distribution around Baja California (Fig. 1 ).
Behavioural thresholds for DO response analysis
We determined depths at which low DO might affect behaviour as being 300 m for BET and 100 m for YFT (Figs. 2 , 3 , 4 ). Using these values, we then selected grid cells where modelled DO at these depths was in the upper or lower 10th percentile (Figs. 5 and 6 ).
Changes in vertical habitat use in response to low DO
The resulting TAD plots identified discontinuities in the time-at-depth between the high DO and low DO areas (Figs. 7 and 8). For BET, 7.8% more time is spent above 20 m when DO is high (p < 0.001, signed-rank test), 12.6% more time is spent between 20 and 55 m when DO is low (p < 0.001, signed-rank test), and 5.3% more time is spent below 55 m when DO is higher (p < 0.001, signed-rank test). YFT spend 16.8% more time below 43 m when DO is higher (p = 0.012, signed-rank test) and 14% more time between 10 and 43 m when DO is low (p < 0.001, signed-rank test).
Using these depth thresholds (55 and 43 m for BET and YFT, respectively), we then selected grid cells where occupancy of depths below the threshold was in the 10th and 90th percentiles (Figs. 9 and 10 ).
Summary of DO and temperature levels encountered
To examine the extent to which the tunas were exposed to low DO, the DO values for all daytime (06:00 to 17:00 local time) locations were used to produce time-at-DO histograms with DO binned at 5 μmol/l (Fig. 11). BET spent 49.2% of their time in low DO waters below the hypoxic threshold of 63 μmol/l, while YFT spent much less time in low DO waters, remaining above 200 μmol/l for 85.5% of their time. However, it is also clear that YFT do visit hypoxic waters and are on occasion exposed to DO levels as low as those experienced by BET, but for a much shorter time, spending only 3.5% of their time-at-DO below 63 μmol/l. The amount of time each species spent at different water temperatures was also determined (Fig. 12). YFT spent 59.8% of their time in waters warmer than 20 °C, while BET spent 53.6% of their time in cooler waters below 15 °C, with both temperatures reflecting the times spent at different depths by the two species. Therefore, YFT spent more time in waters of higher DO and higher temperature, but it is not clear whether it is temperature or DO that restricted their daytime depth distribution.
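The time-at-DO histogram and the fraction of time below the 63 μmol/l hypoxic threshold can be sketched as follows, assuming equal sampling intervals so that record counts are proportional to time. The toy DO records are illustrative, not data from the study:

```python
def time_at_do(do_values, bin_width=5.0):
    """Bin per-record DO values (umol/l) into a time-at-DO histogram;
    with equal sampling intervals, counts are proportional to time."""
    hist = {}
    for do in do_values:
        b = int(do // bin_width) * bin_width   # lower edge of bin
        hist[b] = hist.get(b, 0) + 1
    return hist

def fraction_below(do_values, threshold=63.0):
    """Fraction of records below a hypoxic threshold."""
    return sum(d < threshold for d in do_values) / len(do_values)

# Toy daytime DO records (umol/l) for one individual.
records = [40.0, 55.0, 62.0, 70.0, 180.0, 210.0]
frac = fraction_below(records)
```

The same binning, applied per species across all daytime locations, yields the percentages of time spent below the hypoxic threshold reported above.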
Responses to DO and temperature
Here, we compare bathymetry, DO, and temperature at a range of depths between grid cells where the tunas were spending more or less time below the behavioural depth thresholds (55 m for BET, 43 m for YFT). For clarity, we refer to areas where more time is spent above the threshold as 'shallow' and where more time is spent below as 'deep'. For BET, we found that in the upper 10th percentile of grid cells (deep areas), 90% of time was spent below 55 m, whereas in the lower 10th percentile of grid cells (shallow areas), 76% of time was spent above 55 m. The grid cells selected in these 10th percentiles therefore represent two extremes of occupancy of the water column, as confirmed by the median depths of 197 and 49 m in the deep and shallow areas, respectively.
Overall, we found that for BET, DO was lower at depths of 50 and 300 m; however, these differences were not significant, and correlations between time spent below 55 m and DO were weak and not significant at any depth (Figs. 13, 14, Supplementary Table S3). In contrast, for YFT, DO at all depths was significantly higher and was positively and significantly correlated with time spent below 43 m.
Differences and correlations for both species with temperature were lower (Figs. 13 , 14 , Supplementary Table S4), particularly for BET where the greatest difference was at 50 m (− 5.25%) and none of the differences were significant. Correlations were also weak and only significant at depths below 150 m. For YFT, while differences in temperature were smaller than found with DO, temperatures were all higher in deep areas. Correlations between time spent below 43 m and temperature were all positive and significant up to 150 m, which represents the limit of most of the vertical occupancy.
We found significant differences in median and maximum daily depths between the deep and shallow areas for BET (Supplementary Table S5), with the maximum depth in deep areas being 806 m compared to 289 m in shallow areas. The median depth in deep areas was also greater, at 174 m compared to 42 m, and correlations between time below 55 m and median and maximum depths were also positive and significant. We also computed correlations between the median depth in each location and DO at depths from 50 to 300 m (Supplementary Table S6). For BET, there was no significant correlation at any depth; however, for YFT, there were similar and significant positive correlations between median depth and DO at all depths analysed, suggesting that median depth increases with increasing DO at depth.
BET vertical excursions
The analysis of time-at-depth in low DO grid cells showed that BET spent significantly more time above 55 m (20%, p < 0.001, Mann–Whitney rank sum test). To test whether BET perform more upward vertical excursions, and consequently spend more time in shallower, more oxygenated waters, when DO at depth is low, we counted the number of vertical excursions in the upper and lower 10th percentiles of grid cells for each individual. We found significantly more vertical excursions per individual in low DO areas (Table S7, Fig. 15; low DO median 41.25, high DO median 19.33, p = 0.005, Mann–Whitney rank sum test). There was no significant difference in temperature between high and low DO areas (low DO median temperature 19.61, high DO median temperature 20.91, p = 0.374). Extending the analysis, we also compared the number of vertical excursions between grid cells in the 80th vs 20th, 70th vs 30th, and 60th vs 40th percentile pairs. Figure 15 (and Supplementary Table S7) shows that there were significantly more vertical excursions up to the 60th–40th division of grid cells, but no significant differences in temperature, confirming that it was not only at the extremes of DO concentrations that low DO was associated with an increased number of vertical excursions.
To test the hypothesis that body size (fork length) is negatively correlated with the number of vertical excursions, a Pearson product-moment correlation was performed, which showed a negative correlation (r = −0.503, slope = −0.009). The associated scatter plot (Supplementary Figure S2), however, revealed two distinct clusters of points, in which the BET tagged in 2000 appeared to be a different cohort from those tagged in 2003–2005. A Mann–Whitney rank sum test comparing the lengths of the two groups confirmed this difference (Table S8, p < 0.001), and consequently, the two groups were analysed separately, revealing similar, positive correlation coefficients of r = 0.351 (slope = 0.003) and r = 0.356 (slope = 0.004) for BET 2003–2005 and BET 2000, respectively (Supplementary Figure S3, Table S8).
Occurrence of exceptional deep dives
For BET, we found more and significantly deeper deep dives in the lower 10th percentile of DO grid cells ( p < 0.001, Mann–Whitney Rank Sum Test, Supplementary Table S9). For YFT, however, while there were more deep dives in low DO grid cells, there were significantly deeper dives where DO was higher ( p = 0.015, Mann–Whitney Rank Sum Test). However, for YFT, the sample size was considerably smaller than with BET (only 47 dives in total, compared to 710 for BET). To test whether, where DO at 300 m was low, DO at deeper depths might be higher, we plotted mean DO at depths up to 2000 m in each low DO and high DO grid cell. However, this showed no difference in DO concentrations below about 400 m (Supplementary Figure S6).
Generalised additive modelling results
In all the detailed analysis models, where individual values were computed from time-series data for maximum daily depth, average daily depth, and, for BET, daily number of vertical excursions, the model that included the track ID as a random factor, to account for individual variation, proved to be the better model (Table 1 ). The implication is that in all these cases, individual variability has a greater effect on the response variable than either temperature or DO. These results are presented in full in the OSM.
GAM analysis of the spatially derived metrics (derived from grid cell locations) resulted in much higher percentages of deviance explained, and consequently, these models were explored in more detail (Table 1 ).
BET model outputs
For time spent below 55 m, DO at 100 m explained the most deviance, at 5.9%. DO at 300 m was the second highest scoring DO variable, with 5.67% of deviance explained. As these two values were close, and much higher than the next best (DO150, at 1.72%), both DO100 and DO300 were used together with SST and temperature at 300 m in the multi-variate models. The model selected by AIC included DO100, DO300, and SST, and explained 18.9% of deviance. The model combining DO100, DO300, SST, and Temp300 explained slightly more deviance (19.7%), but the additional parameter resulted in a lower wAIC value (Supplementary Tables S28, S29 and S30). However, the plot of the GAM model (Supplementary Figure S18) shows a complex relationship with no clear trend.
For BET maximum depth, DO at 150 m and temperature at 200 m were selected as the most important depths for the analysis, explaining 19.6 and 26.5% of deviance, respectively. However, DO at 100 m explained 16.6%, and temperature at all other depths explained between 17.4 and 25.4%. Therefore, although the model selected by AIC was DO150 + Temp200 + SST, explaining 59.7%, it is clear that temperature at all depths is probably important for maximum depth. The GAM plots (Supplementary Figure S20), however, do not show clear trends but rather indicate complex relationships. Maximum depth appears to peak at DO concentrations of around 135 μmol/l, while increasing temperature appears to reduce the maximum depth (Supplementary Tables S31, S32 and S33).
For BET mean depth, the most important depths for DO and temperature were DO300 and Temp150, respectively. In both cases, the deviance explained by these factors was considerably more than that explained by factors at other depths. The model selected by AIC includes DO300, Temp150, and SST, explaining 14.7%. The effect of all variables was complex, however. Over the range of most observations, increasing DO reduces average depth, as does increasing SST. With Temp150, average depth is at a maximum at around 13.75 °C, although the difference between minimum and maximum depths is minimal (~ 80 to ~ 100 m). Overall, average depth is not greatly affected by temperature or DO (Supplementary Tables S34, S35 and S36).
DO at 150 m and temperature at 50 m were the most important factors for the number of BET vertical excursions observed in each grid cell, accounting for 38.1 and 41.3% of the deviance, respectively (Supplementary Tables S37 and S38). However, other depths were also important for both DO and temperature. For DO, the depths of 100 and 200 m accounted for 23.2 and 20.7% of deviance, and for temperature, all depths of 150 m and below accounted for at least 32% of the deviance. SST was also found to account for 45.3% of the deviance, and the model selected by AIC included DO, temperature, and SST, accounting for 71.6% of the deviance (Supplementary Table S39). The plots (Supplementary Figure S24) suggest that the number of vertical excursions exhibits a strong peak at lower temperatures (~ 23.5 °C), which then tails off. The effect of DO shown in the plot exhibits a strong peak at about 110 μmol/l; however, below ~ 90 μmol/l, there is a fairly constant lower rate, rather than an increase, which might be expected if low DO were driving vertical excursions. Despite the relatively high value of 71% of deviance explained, there is no clear relationship evident in any of the plots of the variables involved. DO at all depths contributes at least 9% of the deviance explained; however, again, there is no clear relationship evident in any of the plots shown in Supplementary Figure S25.
YFT model outputs
For time spent below 43 m, DO and temperature at 150 m explained 34.6 and 39.9% of deviance, respectively, making them the most significant variables. DO at all depths below 150 m accounted for ~ 30% deviance and temperature at 100 m accounted for 34.5% deviance, suggesting that DO and temperature throughout the water column are important, rather than at specific depths. SST alone accounted for 19.3% and the model selected by AIC included temperature at 150 m and SST, accounting for 50.2% deviance. There was a clear increase in time below 43 m with increasing temperatures at 150 m (Supplementary Figure S27). The relationship with SST is more complex, with an apparent peak at just over 20 °C. There is a clear relationship between time below 43 m and DO concentrations, with time below increasing sharply once DO exceeds 150 μmol/l.
For maximum depth, DO and temperature at 100 m are the most important factors, accounting for 24.5 and 4.09% of deviance, respectively. DO at 150 m might also be important, accounting for 17.1%. SST accounts for slightly more deviance than temperature at 100 m (5.1%) and the model selected by AIC includes DO and SST, accounting together for 28.4% deviance. The plots (Supplementary Figure S29) indicate peaks in max depth at DO concentrations around 200 μmol/l and temperatures around 23 °C.
For mean depth, DO and temperature at 150 m were the most important factors, at 27.3 and 42.7% of the deviance explained, respectively. However, for DO, all depths below 150 m explained at least 20% of the deviance, and temperature at 100 m explained 38%. For this metric, the deviance explained at each depth was generally higher than for the other metrics. SST explained 21.4%, and the selected model included temperature and SST, explaining 44.1% of the deviance. The plots (Supplementary Figure S31) show that above DO concentrations of 150 μmol/l, average depth increases markedly. Above 10 °C, average depth also increases steadily. The SST plot is more complex, with a peak in average depth at just over 20 °C. | Discussion
Here, we investigated the extent to which DO concentrations at depth act as a limiting factor in BET and YFT daytime depth distributions by analysing observed vertical distributions (time-at-depth) in relation to DO and temperature at a range of depths. Determining the role of low DO in shifting fish distributions is important because, in many geographical regions, the distributions of both BET and YFT overlap known oxygen minimum zones (OMZs) (Schaefer et al. 2007 ; Schaefer and Fuller 2009 ), where biological and hydrographic processes combine to produce hypoxic regions at depth (Stramma et al. 2008a ).
Previous studies (e.g., Schaefer et al. 2009 ) have shown that, in the equatorial eastern Pacific, BET are able to stay for prolonged periods at depth, tracking the deep scattering layer (DSL) and performing only brief vertical excursions to shallower, warmer and better oxygenated waters. YFT were found to search at similar depths close to the DSL for similar prey items, but spent little time at depth, instead tending to perform repetitive bounce dives between deep, cold, low-DO waters and shallower, warmer and better oxygenated waters (Schaefer et al. 2009 ). This difference in behaviour is likely because YFT lack some of the physiological adaptations of BET, such as counter-current heat exchange, which allows BET to retain heat in their swimming muscles (Holland et al. 1992 ; Boye et al. 2009 ).
The analysis of time-at-DO performed here showed a clear difference between BET and YFT, with BET spending 28% of their time at DO concentrations below the proposed 63 μmol/l hypoxic threshold (Breitburg et al. 2018 ) and 68% of their time below 200 μmol/l, while YFT spent 85% of their time above 200 μmol/l. This difference in time-at-DO could simply result from differing patterns of vertical space use, driven by different foraging strategies and target prey species, rather than by limitations imposed by low DO. For example, BET characteristically dive to depths of around 250 to 300 m to forage on vertically migrating prey close to the deep scattering layer, such as myctophids, with fish comprising a greater proportion of the diet than in YFT (Menard et al. 2006 ). YFT, by contrast, although increasing their average depth during daytime hours, restrict most of their activity to above 200 m (Schaefer et al. 2009 ; Schaefer and Fuller 2010 ; Matsumoto et al. 2013 ; Fuller et al. 2015 ), where crustaceans such as the pelagic red crab ( Grimothea planipes ) are more important prey items than for BET (Menard et al. 2006 ). As a result of these foraging behaviours and prey preferences, the time-at-DO and time-at-temperature profiles differ considerably between the two species.
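Time-at-DO percentages of this kind can be derived from a binned time-at-DO histogram. The sketch below uses a hypothetical four-bin histogram constructed so that the totals reproduce the BET percentages quoted above (28% below 63 μmol/l, 68% below 200 μmol/l); the bin edges and hours are illustrative only:

```python
# Hypothetical time-at-DO histogram: (upper bin edge in μmol/l, hours recorded)
bins = [(63, 28.0), (150, 25.0), (200, 15.0), (300, 32.0)]

total = sum(hours for _, hours in bins)
below_hypoxic = sum(hours for edge, hours in bins if edge <= 63)
below_200 = sum(hours for edge, hours in bins if edge <= 200)

print(round(100 * below_hypoxic / total))  # 28
print(round(100 * below_200 / total))      # 68
```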
By analysing the time-at-depth profiles from areas (sets of 1-degree grid cells) at the two extremes of DO concentration at depths relevant to the probable foraging depths of the two species, we were able to identify clear threshold depths at which behaviour changed between high and low DO areas. Interestingly, the threshold depths we identified differed from the depths at which we used the DO concentration to separate the 1-degree areas. For BET, we used DO at 300 m and found behavioural thresholds at 55 and 185 m, while for YFT, we used DO at 100 m and found a threshold at 43 m. These differences between the depth used to define low and high DO areas and the depths at which changes of behaviour were observed suggest that behavioural responses to DO in the water column are complex and that the fish are not simply limited by a particular oxycline. It is also possible that the actual DO levels experienced by the fish might be more heterogeneous than represented in the modelled DO used in the study, a possibility that future work, using tags that measure DO in situ on tagged fish, would help to explore.
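One simple way to locate such a behavioural threshold is to compare mean time-at-depth profiles between high- and low-DO areas and find the depth at which the dominance of time use switches between the two groups. The depth bins and profile values below are hypothetical, purely to illustrate the idea:

```python
# Hypothetical mean time-at-depth (% of time) per depth bin, by area type
depth_bins = [25, 50, 75, 100, 150, 200, 250, 300]   # bin depths in metres
high_do =    [10, 12, 10, 10,  12,  14,  16,  16]    # high-DO areas
low_do  =    [18, 20, 14, 12,  12,  10,   8,   6]    # low-DO areas

# Behavioural threshold: shallowest depth at which fish in high-DO areas
# spend more time than fish in low-DO areas (time use switches to deeper water)
threshold = next(d for d, h, l in zip(depth_bins, high_do, low_do) if h > l)
print(threshold)  # 200
```

With real data, the same crossing point would be checked for statistical significance rather than read off a single pair of profiles.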
By then categorising grid cells by the time spent above and below these behavioural thresholds, we were able to analyse the response to DO in relation to environmental variables at multiple depths, without specifying DO levels or hypoxic thresholds directly. This is important, as comparatively little is known about physiological responses to hypoxia in free-living fish, and setting thresholds and expectations a priori could bias the results. Our results confirmed that there was a much stronger correlation between time-at-depth and DO concentrations for YFT than for BET. At all depths, DO was significantly higher in areas where YFT spent more time below the observed 43 m threshold and, conversely, in regions where DO at depth was low, YFT spent significantly more time above 43 m, as expected if YFT avoid areas and depths where DO concentrations are low. We also showed that, for YFT, the median depth was positively correlated with DO concentrations at depth, further supporting either the avoidance of low DO water, or the preference for high DO water, in this species. The results from the GAM analysis support these findings, with DO at 150 m accounting for 35% of the deviance in time spent below 43 m and DO at 100 m being the most important factor in determining maximum depths reached (24.5% of deviance). For average depth as well, DO at 150 m explained 27% of the deviance, and there was a clear increase in average depth at DO concentrations above 150 μmol/l. We conclude that DO likely represents a limiting factor for YFT vertical distributions throughout their vertical range and that, as a result, expanding OMZs are likely to cause shifts in YFT horizontal and vertical distributions in those geographic regions where they overlap.
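Categorising grid cells by time spent below a behavioural threshold can be sketched as follows. The cell identifiers and fractions are hypothetical, and the median split used here is only illustrative (the study compared DO percentile extremes rather than splitting at the median):

```python
from statistics import median

# Hypothetical per-cell fraction of daytime spent below the 43 m threshold
cells = {"c1": 0.62, "c2": 0.35, "c3": 0.48, "c4": 0.71, "c5": 0.22}

cut = median(cells.values())
groups = {cid: ("deep-use" if frac > cut else "shallow-use")
          for cid, frac in cells.items()}
print(groups["c4"], groups["c5"])  # deep-use shallow-use
```

Environmental variables (DO, temperature, SST) can then be compared between the two groups without specifying any DO threshold in advance.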
BET, by contrast, showed no clear shift in vertical distribution in relation to DO below depths of around 55 m; instead, there was a trend of decreasing DO at depth in the areas where BET spent more time below 55 m. This is the opposite of what would be expected if BET were avoiding low DO, suggesting that DO concentration was less important in driving vertical habitat use. BET continued to forage at depth, even at hypoxic DO concentrations, as evidenced by the time-at-DO analysis performed here. Similarly, in the analysis of time spent above and below the second threshold identified, of 185 m, we again found no significant differences in DO or temperature at any depth. In the GAM analysis, DO at 100 m explained 5.9% of the deviance for time below 55 m and was shown to be as important as temperature at 300 m or SST, which explained 5.13 and 5.92%, respectively. In combination, DO at 100 m, temperature at 300 m and SST explained 18.9% of the deviance, which suggests that, together, these factors are likely important drivers of the observed vertical distributions. However, the relationship between DO and time below 55 m in the GAM plots was complex and no clear trend emerged. For maximum and average depths as well, the GAM analysis found DO to be the most important factor, but again, the plots did not show a clear trend, as seen with YFT, instead showing a peak at low DO, which could simply be the result of deeper diving encountering lower DO concentrations.
For both species, the GAM analysis revealed no unequivocal relationships between any measured behaviour (e.g., maximum depth) and any environmental factor investigated, despite the level of deviance explained suggesting otherwise in some cases (e.g., BET vertical excursions at 71%). To identify changes in behaviour associated with differing DO at depth, it was necessary to compare the extremes of DO (10th and 90th percentiles), which suggests that, while differences were identified between species, the relationships between these changes in behaviour and environmental factors are complex and confounded with many other factors, among which individual variation plays a significant role. In the analysis of individual time-series data, individual variation, possibly influenced to some extent by the differing sizes (fork lengths) of the fish, obscured any other relationships. Intra-specific variation of this nature can often confound the determination of drivers of behaviour and, consequently, many studies (including this one) resort to larger scale approximations from averages or aggregations of population level data (Lubitz et al. 2022 ).
For a high-oxygen-demand predator like BET to forage at such low DO concentrations, some adaptations to low DO are expected. Some of these are known to be physiological, such as BET haemoglobin having higher oxygen affinity than that of either yellowfin or skipjack tunas (Lowe et al. 2000 ; Mislan et al. 2017 ). Further, their specific blood chemistry has been shown to release more O 2 when the blood is warmed on re-entering muscle tissue than that of either yellowfin or skipjack tunas, releasing more O 2 where it is most in demand (Lowe et al. 2000 ). Nonetheless, when ambient DO is at hypoxic concentrations, well below 63 μmol/l, O 2 stored in blood haemoglobin or muscle myoglobin will eventually be depleted and an oxygen debt will be incurred. Consequently, we hypothesised that the characteristic upward vertical excursions performed by BET when foraging would provide an opportunity to replenish blood oxygen, as well as to rewarm the body. If this were the case, then we would expect an increase in the number of upward vertical excursions in areas where DO is lower and, indeed, our results supported this hypothesis. BET performed four times as many upward vertical excursions in the lowest DO areas compared to the highest DO areas; however, we found no concomitant significant difference in temperature. By extending the analysis to compare areas where the difference in DO was smaller (e.g., the 60th–40th percentiles), we found that the number of vertical excursions was still significantly different, but there was still no significant difference in temperature between these areas. These results suggest that in areas where DO is below the 40th percentile, DO likely becomes a more important driver of upward vertical excursions, with a principal function being to replenish blood oxygen.
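Counting upward vertical excursions from an archival-tag depth record can be done with a simple two-state rule: the fish is "at depth" until it ascends above a shallow cutoff, at which point one excursion is logged, and it must return to depth before another can be counted. The cutoff depths and the short track below are hypothetical illustrations, not the criteria used in the study:

```python
def count_upward_excursions(depths, deep=200, shallow=100):
    """Count excursions in which a fish at/below `deep` m ascends above
    `shallow` m and then returns to `deep` m (depths positive downwards)."""
    count, at_depth = 0, depths[0] >= deep
    for d in depths:
        if at_depth and d <= shallow:
            count += 1          # ascent above the shallow cutoff: one excursion
            at_depth = False
        elif not at_depth and d >= deep:
            at_depth = True     # fish has returned to depth; re-arm the counter
    return count

track = [250, 240, 90, 80, 230, 250, 95, 260, 255]  # hypothetical depth record (m)
print(count_upward_excursions(track))  # 2
```

The two-state (hysteresis) design prevents small oscillations around a single cutoff from being double-counted as separate excursions.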
Interestingly, the GAM analysis found temperature in shallow (50 m) water to be more important than temperature at depth, explaining 41% of the deviance, with more vertical excursions being performed when surface waters were cooler. Although the extent of rewarming will be less in cooler shallow water than in warmer water, cooler water typically holds more DO and will replenish blood O 2 , so these results conform to our hypothesis. DO at depth (150 m) was also found to be important, explaining 38% of the deviance, but here the modelling suggested that more vertical excursions were performed when DO was higher. Again, however, the GAM does not necessarily identify causal relationships, and it is possible that more activity took place when DO at depth was higher, possibly because prey was more abundant. More activity would likely result in an increase in the number of upward vertical excursions to repay an oxygen debt, as increased activity consumes more oxygen; at the same time, however, the increased activity would contribute to maintaining body temperature through the warming of muscle tissue. Consequently, we propose, based on these results, that activity at depth was limited more by DO than by temperature, at least for the individuals in the habitats analysed.
Brill ( 2007 ) noted that low ambient DO can prolong the repayment of an oxygen debt following exercise, which might limit habitat suitability; therefore, DO levels at depths shallower than the activity depth may also be important to both species. The results presented here appear to support this idea. DO was found to be lower at all depths in locations where the tunas shifted to shallower depths and, for both species, the depth at which a change in behaviour was noted was shallower than the depth at which DO concentrations were used to identify high and low DO areas. The effect of body size (fork length) on the number of vertical excursions performed did not, in this case, support the hypothesis that the increase in thermal inertia resulting from an increase in body size would reduce the number of vertical excursions performed. While the initial investigation seemed to confirm this, it was evident that pooling the data obscured differences between tunas tagged in 2000 and those tagged in later years. Analysing each cohort of fish separately showed that both groups shared a similar response, with increasing body length resulting in a small increase in the number of vertical excursions (3 to 4 more per day per metre of body length). It is possible that this small increase results from an increase in size-specific post-prandial O 2 consumption (specific dynamic action, SDA) related to consumption and processing of a meal (Fitzgibbon et al. 2007 ; Fitzgibbon and Seymour 2009 ). For example, southern bluefin tuna ( Thunnus maccoyii ) were found to increase swimming speed depending on the meal size consumed, leading to an increase in ventilation across the gills, presumably to counter the increase in O 2 consumption resulting from higher SDA (Fitzgibbon et al. 2007 ). However, this would only be the case if the meal consumed was proportionally larger relative to body size.
Another factor that could lead to larger fish performing more vertical excursions in low DO waters is that, although increased body size acts to retain body heat, the same is not true for oxygen, with larger bodies requiring more oxygen; thus, while larger fish will tend to stay warmer, this would only increase their oxygen requirements. It is most likely that both temperature and oxygen act in concert to determine the specific threshold at which an individual is forced to abandon foraging and ascend to shallower, warmer, oxygen enriched waters to recover, before diving again.
It was interesting, and somewhat contrary to expectations, that the number and depth of exceptionally deep dives for both species were found to be greater in regions where DO at 100 m (for YFT) or 300 m (for BET) was lower. DO concentrations at the depth of the deep dives were found to be slightly but significantly lower for deep dives performed in low DO areas, which, given the significantly deeper dives, is expected (Supplementary Table S10). However, this does suggest that the tunas are not diving to below the OMZ. This analysis has unfortunately not added to our understanding of why species such as tunas perform these exceptionally deep dives, and this remains an interesting question for further research.
A limitation of our study was that we could not determine from these observations whether the tunas were shifting to shallower, DO enriched waters because of their own physiological limitations or preferences, or whether they were following the shifting distributions of prey species. Work in the tropical and equatorial Atlantic suggests that the typical mesopelagic prey species of BET, such as myctophids, phosichthyids, and gonostomatids (Bertrand et al. 2002a ; Schaefer and Fuller 2009 ; Karuppasamy et al. 2011 ), may be tolerant of low DO and able to migrate through OMZ regions along with other prey taxa and to remain at depth (Olivar et al. 2017 ). Therefore, the behaviour of BET might represent physiological preferences and limitations. For YFT, it might be that their preferred prey species (juvenile fish and crustaceans) have a lower tolerance of low DO, and that the vertical distribution of YFT more closely reflects that of its prey (Bertrand et al. 2002b ). A further limitation pertains to the scale and nature of the modelled DO used here, and to the positional accuracy of the estimated tuna locations. The modelled DO at a scale of 0.25-degree (~ 25 km) represents a large-scale homogeneous gradient of DO, which very likely differs from the actual DO experienced by the tunas, where micro-scale eddies and currents will contribute to increased heterogeneity. This problem of scale is exacerbated by the errors in location accuracy resulting from the light level geolocation, which can be as much as 1 degree of longitude and 2 degrees of latitude (Lam et al. 2010 ). Consequently, it is not possible at present to study in detail the responses of individual fish, and therefore, a larger scale spatial analysis, despite its limitations, was more appropriate.
Nonetheless, the results here suggest that if OMZs increase in volume or shoal to shallower depths, as expected due to further climate-driven deoxygenation, then the habitat occupied by tuna, especially YFT, will also be shifted and compressed, potentially altering their susceptibility to capture, regardless of whether it is the tunas’ DO intolerances or that of their prey that drives the shifts. While tunas represent the higher end of metabolic oxygen demand in water-breathing marine predators, it is likely that many other important marine apex predators with high metabolic rates, such as marlin, sailfish, and lamniform sharks, will be similarly affected by low DO, as has been shown recently for ectothermic blue sharks (Vedor et al. 2021b ). | Conclusions
Both species respond to low DO in different ways. BET do not significantly adjust their depths in response to lower DO; however, they do increase the number of upward vertical excursions they perform, which reduces the time available for foraging. YFT, on the other hand, forage in shallower depths when DO is lower; however, whether this is because of YFT’s physiological intolerance of lower DO or a response to hypoxia-induced shifts in prey distributions remains to be determined. With climate-driven decreases in DO at depth, YFT are likely to shift their depth and possibly horizontal distribution as a result. There is also the further possibility that if YFT shift their vertical distributions to shallower depths, this could make them increasingly vulnerable to capture by commercial fishing vessels, particularly purse-seines.
For BET, while activity depth is less likely to be affected, the increased number of upward vertical excursions will reduce time spent at depth and increase time spent in shallower water. As with YFT, if BET spend more time at shallower depths, then there could be an increased susceptibility to capture by longlines or purse-seines. BET might also be affected as a result of prey species’ responses to lower DO; if prey species are forced into shallower, more oxygen-rich water then this habitat compression could benefit BET and reduce the impact due to increased vertical excursions. Again though, this would place BET in shallower water where susceptibility to capture might be increased. There could therefore be multiple detrimental effects on survival and reproductive capacity for both species, exacerbating the existing impacts from industrialised fishing. The increased occupancy of shallower waters predicted here should be accounted for in stock assessments as well as in mitigating their increased vulnerability to fishing. Future research would benefit considerably from tagging studies using a tag that can measure DO in situ, so that the actual DO concentration encountered by the fish could be determined. Not only would the fish be acting as oceanographers, providing accurate information on the heterogeneity of DO at depth, but the data would allow actual DO tolerances and preferences to be determined, thus making a significant contribution to our understanding of the impact of expanding OMZs on marine ecosystems. | Responsible Editor: H.-O. Pörtner.
Oxygen minimum zones in the open ocean are predicted to increase significantly in volume over the coming decades as a result of anthropogenic climatic warming. The resulting reduction in dissolved oxygen (DO) in the pelagic realm is likely to have detrimental impacts on water-breathing organisms, particularly those with higher metabolic rates, such as billfish, tunas, and sharks. However, little is known about how free-living fish respond to low DO environments, and therefore, the effects of increasing OMZs cannot be predicted reliably. Here, we compare the responses of two active predators (bigeye tuna Thunnus obesus and yellowfin tuna Thunnus albacares ) to DO at depth throughout the eastern Pacific Ocean. Using time-series data from 267 tagged tunas (59,910 days) and 3D maps of modelled DO, we find that yellowfin tuna respond to low DO at depth by spending more time in shallower, more oxygenated waters. By contrast, bigeye tuna, which forage at deeper depths well below the thermocline, show fewer changes in their use of the water column. However, we find that bigeye tuna performed brief upward vertical excursions four times as frequently when DO at depth was lower, with no concomitant significant difference in temperature, suggesting that this behaviour is driven in part by the need to re-oxygenate following time spent in hypoxic waters. These findings suggest that increasing OMZs will impact the behaviour of these commercially important species, and it is therefore likely that other water-breathing predators with higher metabolic rates will face similar pressures. A more comprehensive understanding of the effect of shoaling OMZs on pelagic fish vertical habitat use, which may increase their vulnerability to surface fisheries, will be important to obtain if these effects are to be mitigated by future management actions.
Supplementary Information
The online version contains supplementary material available at 10.1007/s00227-023-04366-2.
Keywords | Supplementary Information
Below is the link to the electronic supplementary material. | Acknowledgements
The bigeye tuna archival tag data sets utilized in this study were obtained from tagging experiments made possible by the generous financial support of Japan Fisheries Agency and the Taiwan Fisheries Agency. The yellowfin tuna archival tag data sets utilized in this study were obtained from tagging experiments made possible by the Tagging of Pacific Pelagics (TOPP) program, the owners, crew, and passengers aboard the FV Royal Star and FV Shogun, and permits provided by the Government of Mexico.
Author contributions
Conceptualization: NEH and DWS. Data curation: DWF and KMS. Formal analysis: NEH. Funding acquisition: DWS. Project administration: DWS. Resources: DWF and KMS. Software: NEH. Supervision: DWS. Writing—original draft: NEH. Writing—review and editing: DWF, KMS, and DWS.
Funding
Funding for data analysis was provided by a UK Natural Environment Research Council (NERC) Discovery Science Grant (NE/R00997/X/1) and a European Research Council Advanced Grant (ERC-AdG-2019 883583 OCEAN DEOXYFISH), both to D.W.S. D.W.S. was supported by a Marine Biological Association Senior Research Fellowship.
Data availability
The datasets generated during and/or analysed during the current study are not publicly available as they are owned and archived by the Inter-American Tropical Tuna Commission, but are available from the corresponding author on reasonable request.
Declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Ethical approval
All fish were captured and handled following the guidelines outlined by the National Institutes of Health (NIH), international guiding principles for biomedical research involving animals (NIH 2012 ). | CC BY | no | 2024-01-15 23:41:53 | Mar Biol. 2024 Jan 13; 171(2):55 | oa_package/85/90/PMC10787700.tar.gz |
PMC10787706 | 38217782 | Background
Subjective views of ageing and informal caregiving
Subjective views of ageing can include perceptions of age in general or of one’s own ageing process (Chasteen and Cary 2015 ; Wurm and Westerhof 2015 ). In this study, we analyse attitudes towards one’s own ageing (ATOA) and subjective age (SA) as indicators of views of one’s own ageing process, i.e. personal views. Both measure different aspects of views of ageing but are related to each other (Bodner et al. 2017 ; Diehl et al. 2014 ). SA operates on a conscious level, while ATOA is already active on sub- and preconscious levels and also includes affective, cognitive and evaluative factors, which reflect internalized societal as well as individual attitudes (Diehl et al. 2014 ; Hess 2006 ). Onset of old age (OOA) was included as an indicator of views of age in general, as an addition to the aforementioned personal views (Shrira et al. 2022 ).
A worsening of these views of ageing can impact health, well-being, and longevity negatively, while more positive views of ageing can be beneficial to these outcomes (Alonso Debreczeni and Bailey 2020 ; Chang et al. 2020 ; Kotter-Gruhn et al. 2009 ; Westerhof et al. 2014 , 2023 ). Thus, better, or improving, views of ageing are of relevance to informal caregivers, that is, to relatives or friends providing unpaid support to individuals with care needs, who often report worse health and well-being due to their care performance (Bom et al. 2019 ; Zwar et al. 2018 ).
Only very few studies have analysed the association between informal caregiving and views of ageing, and findings point in both directions, towards improvement as well as worsening of views of ageing. For example, one study pointed towards a worsening of attitudes towards older adults among caregivers (Luchesi et al. 2016 ), while another study indicated more positive attitudes towards ageing among caregivers (Loi et al. 2015 ). A worsening of views of ageing is in line with terror management theory applied to ageing (TMT-A; Martens et al. 2005 ). The new experiences and increased confrontation with impairments and dependency could remind caregivers of their own vulnerability and mortality, showing them a fate they may eventually share. This can result in distress, disgust and a wish to avoid or devalue these reminders (i.e. negative subjective views of ageing), and can activate caregivers’ negative age views. However, reminders of mortality may also change goals, as socioemotional selectivity theory indicates (SST; Carstensen et al. 1999 ; Löckenhoff and Carstensen 2004 ). People who perceive themselves as closer to death focus more on emotionally fulfilling and meaningful goals, such as improving relationships and feeling valued. This is in line with the findings of our own previous work, in which we analysed onset and end of caregiving and found that these are associated differently with views of ageing in the group of caregivers aged ≥ 80 years (Zwar et al. 2022 ). Older caregivers benefited at the onset of care in terms of better views of ageing (more positive ATOA) but not at the end of care (higher SA). Thus, caregiving may have emphasized more positive aspects of ageing and fulfilled more emotionally meaningful goals.
In sum, previous research already points to an association between informal caregiving and changes in views of ageing; however, more research is still needed to understand the underlying mechanisms. It remains unclear which aspects of the caregiving performance are relevant to the different indicators of views of ageing. Therefore, we intend to build on and expand our previous work with this study, in which we aim to identify aspects of the care situation that drive changes in views of ageing among caregivers. We assume that the intensity of informal caregiving in particular is of importance.
So far, very few studies have focused on specific aspects of care and their association with views of ageing. First findings indicated that lower caregiving burden is associated with lower anxiety about ageing (Hamama-Raz et al. 2022 ). An effect of the burden of caregiving among adult children on the views of ageing of their care-receiving parents was found, although in these dyadic analyses burden was not associated with perceptions of their own ageing among the caregiving adult children (Kim et al. 2023 ). More research is therefore needed that analyses changes in views of ageing in association with different aspects of caregiving intensity. Our findings will highlight how caregiving could be designed to support positive, or at least prevent negative, views of ageing, which could be very helpful given the relevance of views of ageing for health and well-being (Tully-Wilson et al. 2021 ; Westerhof et al. 2023 ).
The role of care intensity for views of ageing
Higher intensity of caregiving is associated with worse health and psychosocial well-being (Bremer et al. 2015 ). In terms of more care hours and tasks, it is usually associated with more support needs (Rodríguez-González et al. 2021 ). Thus, higher care intensity provides more opportunity for confrontation with dependency and illness, and for reminders of mortality. Moreover, higher care intensity may activate more age-related stereotypes, such as attributing exhaustion or tiredness due to caregiving to age. Therefore, we expect higher intensity to be a relevant predictor of changes in views of ageing.
We aim to analyse different indicators of caregiving intensity, namely hours of care per week, range of care tasks and burden of care. While these factors are related, they focus on different aspects of intensity. The range of care tasks indicates the diversity in care provision and thus is more a qualitative aspect of intensity, while caregiving time is more of a quantitative indicator. Both are also objective indicators of care intensity. A subjective indicator is caregiver burden. Burden reflects the level of care-specific stress and provides insight into the subjective perception of care intensity (Graessel et al. 2014 ), with which it is associated (Rodríguez-González et al. 2021 ). Analysing all indicators as possible predictors will provide us with information which of these factors may be most important for views of ageing.
We also assume that age and gender may play a role in these associations. Caregivers aged 65 years or older may be affected differently by the aforementioned effects than younger caregivers. Older age is often associated with an increased range of age-specific cues compared to younger age. In line with our findings from the previous study (Zwar et al. 2023 ) and with SST, which indicates that socioemotional goals become more important (Carstensen et al. 1999 ; Löckenhoff and Carstensen 2004 ), we assume that older caregivers may also benefit more from caregiving intensity regarding their personal views of ageing, at least in terms of care tasks and time, than younger caregivers.
We also expect female caregivers to be affected more strongly by any associations. Women usually spend more hours on caregiving, provide more care tasks than men and experience higher caregiver burden (Pinquart and Sörensen 2006 ; Stanfors et al. 2019 ; Zygouri et al. 2021 ). Gender differences in views of ageing have been inconsistent; however, they indicate that women are usually more worried about old age and have a less favourable perspective on their ageing (Ayalon 2014 ; Bai 2014 ; Barrett and Von Rohr 2008 ). Thus, women may be more vulnerable to the activation of age stereotypes by age-specific cues such as informal caregiving, and may therefore show larger changes in views of ageing in response to caregiving intensity than male caregivers.
Sample
Data from the 2014 and 2017 waves of the population-based German Ageing Survey from the German Centre for Gerontology were used (DZA, 2014, 2017). This is a cohort-sequential panel representing community-dwelling adults aged 40 years and older in Germany, who were surveyed by means of an interview and an additional written questionnaire covering sensitive topics. The sample is extended every 6 years with a new sample drawn with a two-stage sampling method, stratified by age, gender and region. Earlier waves were excluded because they did not include all of the analysed variables (e.g. ATOA). We included all participants who provided informal care to an adult with health-based care needs (caregivers for children or grandchildren were excluded; ‘Are there any persons who, due to their poor state of health, are looked after or cared for by you privately or on a voluntary basis, or for whom you provide regular help on a regular basis?’) and who had participated in both interview and questionnaire ( N = 2162). To analyse whether changes in the predictors were associated with changes in the outcomes, we used Fixed Effects (FE) regression analyses, which include in the estimation only those participants who experienced a change in the analysed variables (average treatment effect on the treated, ATET; Brüderl 2010 ). Written informed consent was provided by all participants. The criteria of the German Research Foundation for an ethics vote do not apply; therefore, an ethics vote was not needed and not applied for (Deutsche Forschungsgemeinschaft, 2010–2021).
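The FE estimator's reliance on within-person change (the ATET logic described above) can be illustrated with the within-transformation: each person's observations are demeaned, so participants with no change in the predictor contribute nothing to the estimate. A minimal single-regressor sketch with hypothetical two-wave data (not actual DEAS values):

```python
from statistics import mean

# Hypothetical two-wave panel: person id -> [(burden, ATOA)] per wave
panel = {
    1: [(2.0, 3.25), (3.0, 2.75)],  # burden rises, ATOA falls
    2: [(1.0, 3.50), (1.0, 3.50)],  # no change -> contributes nothing to FE
}

# Within-transformation: subtract each person's own means
demeaned = []
for obs in panel.values():
    b_mean = mean(b for b, _ in obs)
    a_mean = mean(a for _, a in obs)
    demeaned += [(b - b_mean, a - a_mean) for b, a in obs]

# FE slope for the single-regressor case: sum(x*y) / sum(x*x) over demeaned data
num = sum(x * y for x, y in demeaned)
den = sum(x * x for x, _ in demeaned)
print(num / den)  # -0.5
```

Person 2's demeaned values are all zero, so only person 1 (the "treated" changer) drives the estimated within-person association.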
Variables
Main predictors
Caregiving time was measured as hours per week (‘How much time do you spend per week helping the person you support?’; Range: 0 to 168 h per week). Informal caregivers were asked ‘What help and support do you provide?’ in terms of household help, supervision and support, nursing care tasks or other care tasks. These care tasks were summed into a variable indicating in how many of these areas caregivers provided support, resulting in our range of care tasks variable (Range: 0–4) and thus capturing the range or diversity of caregiving. Caregiving burden was measured by asking caregivers to consider all care and support they provide and to evaluate how burdened they are by this performance (‘If you look at these aids or care services as a whole, how much of a burden do they place on you?’; Range: 1 not at all – 4 very much).
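The construction of the range-of-care-tasks score is simple enough to sketch. The function below is a hypothetical illustration of summing the four task-area indicators; the variable names are ours, not from the survey instrument:

```python
def range_of_care_tasks(household, supervision, nursing, other):
    """Count in how many of the four task areas a caregiver is active
    (0-4); a qualitative 'diversity of care' measure, as opposed to
    the quantitative weekly-hours measure."""
    return sum(bool(task) for task in (household, supervision, nursing, other))
```

For example, a caregiver providing household help and supervision but no nursing or other tasks would score 2 on this measure.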
Outcomes
The German version of the subscale attitude towards one’s own ageing (ATOA) from the Philadelphia Geriatric Center Morale Scale (PGCMS; ‘The older I get, the worse everything becomes’, ‘Have same energy as last year’, ‘The older I get, the less useful I am’, ‘The older I get, life is better than expected’, ‘Now as happy as in younger years’; Range: 1–4) was used (Lawton 1975; Liang and Bollen 1983). This is a reliable and well-established scale in research on perceptions of ageing (Cronbach’s α = 0.75–0.76; Kotter-Gruhn et al. 2009; Westerhof et al. 2014; Wurm et al. 2014). The items were coded so that higher scores indicate a more positive perception of one’s own ageing, and a mean score was calculated across the 5 items (Range: 1–4; Beyer et al. 2015). Subjective age (SA) refers to how old people feel (‘Apart from your actual age: If you are to express it in years, how old do you feel?’). We treated all values more than three standard deviations above or below the sample mean as outliers and excluded them, in line with procedures in previous research (Stephan et al. 2015; Weiss and Lang 2012). Onset of old age (OOA) was measured by asking people at what age they would consider someone as being old (‘At what age would you describe someone as old?’). For this measure, we also excluded outliers more than three standard deviations above or below the mean.
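The ±3 SD outlier rule applied to SA and OOA can be sketched as follows. This is a minimal illustration; the function name and the use of the sample (rather than population) standard deviation are our assumptions:

```python
import statistics

def trim_outliers(values, k=3.0):
    """Drop values more than k standard deviations from the mean,
    mirroring the +/- 3 SD exclusion rule used for subjective age
    and onset of old age."""
    m = statistics.mean(values)
    s = statistics.stdev(values)  # sample standard deviation
    return [v for v in values if abs(v - m) <= k * s]
```

A single extreme value (e.g. an implausibly high subjective age) is removed, while ordinary variation is untouched.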
Covariates
The caregiver’s sociodemographic background and health were measured. Chronological age was measured as a continuous variable (beginning at 40 years) and as a dichotomous variable (middle-aged: < 65 years; older: ≥ 65 years). Gender included male and female as categories. Marital status (married and living together or separately vs. divorced, widowed or single) and employment status (employed vs. currently not employed, including retired and unemployed individuals) were measured as dichotomous variables. Health was measured in terms of self-rated health (Range: 1–5, higher values indicate worse health) and number of chronic illnesses (e.g., diabetes, cardiovascular disease; count score, Range: 0–11).
Statistical analysis
We conducted FE regression analysis in this study (Brüderl 2010; Wooldridge 2010). With longitudinal data, unobserved heterogeneity can be differentiated into a time-constant and a time-varying (idiosyncratic) error. FE regression analysis allows for the possibility that the time-constant error is associated with the analysed variables, which would otherwise severely bias the estimated parameters. The method therefore focuses only on time-varying factors and controls for all time-constant observed and unobserved variables (e.g., genetic disposition, gender). As a result, only time-varying covariates have to be controlled to fulfil the assumption that the idiosyncratic error is not associated with the analysed variables. This assumption is much weaker than the assumptions of other panel analysis methods, such as Random or Mixed Effects methods, which rely on the assumption that the analysed variables are not associated with any error, time-constant or time-varying. Since this assumption is rarely fulfilled, estimates can be severely biased. The weaker assumptions of FE regression analysis are more likely to be fulfilled and enable the estimation of consistent parameters, i.e. estimates that converge to the true (unbiased) value. This is a major advantage in research with observational panel data. Results from Sargan–Hansen tests (Schaffer and Stillman 2016) support our decision to use FE regression analysis (results available upon request).
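The within (FE) estimator described above can be sketched for a single predictor: demean the predictor and the outcome within each person, then estimate the slope on the demeaned data, so that all time-constant heterogeneity drops out. The data and function name below are hypothetical illustrations (the study itself used Stata):

```python
from collections import defaultdict

def within_slope(ids, x, y):
    """Fixed-effects (within) estimator for one predictor: demean x and y
    within each person, then compute the OLS slope on the demeaned data.
    Time-constant person characteristics (e.g. gender, genes) cancel out."""
    sum_x, sum_y, n = defaultdict(float), defaultdict(float), defaultdict(int)
    for i, xi, yi in zip(ids, x, y):
        sum_x[i] += xi; sum_y[i] += yi; n[i] += 1
    num = den = 0.0
    for i, xi, yi in zip(ids, x, y):
        xd = xi - sum_x[i] / n[i]   # deviation from person mean of x
        yd = yi - sum_y[i] / n[i]   # deviation from person mean of y
        num += xd * yd
        den += xd * xd
    return num / den
```

With simulated data where y = 2·x plus an arbitrary person-specific constant, the estimator recovers the slope 2 exactly, regardless of how large the person effects are.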
Since the method focuses only on time-varying factors, only participants who varied in the analysed variables are used for the estimation of the regression coefficients (ATET; Brüderl and Ludwig 2015 ). We used the xtsum command to check for variation in the continuous predictor variables. This is a command from the statistical software Stata that is used for longitudinal data (xt) and provides information on mean values and standard deviation. To reduce the risk of bias by serial autocorrelation and heteroscedasticity, we calculated robust standard errors (Cameron and Trivedi 2009 ). The sample of our analyses contained only very few missing values (below 5%); thus, listwise deletion was used (Allison 2001 ).
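The xtsum-style decomposition of a variable into overall, between-person and within-person variation can be sketched as follows. This is a simplified illustration of what the Stata command reports; the names are ours:

```python
import statistics
from collections import defaultdict

def xtsum(ids, x):
    """Stata-style xtsum summary: overall SD, between SD (of person
    means) and within SD (of deviations from person means). A within
    SD of zero means no one changed over time, so FE regression could
    not use the variable."""
    overall = statistics.pstdev(x)
    groups = defaultdict(list)
    for i, xi in zip(ids, x):
        groups[i].append(xi)
    means = {i: statistics.mean(v) for i, v in groups.items()}
    between = statistics.pstdev(list(means.values()))
    within = statistics.pstdev([xi - means[i] for i, xi in zip(ids, x)])
    return overall, between, within
```

Checking that the within SD is nonzero for each predictor is exactly the variation check described in the text.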
All models were adjusted for the caregiver’s health and sociodemographic data, except for gender and education, which, as time-constant variables, would be omitted during estimation of the FE regression analysis. Age and gender were used as moderators, i.e. we analysed interaction effects between dichotomized age, respectively gender, and the three caregiving intensity indicators, and both variables were used for stratification in further analyses. Age was dichotomized into two groups (middle age: 40 to 64 years; old age: 65 years and older) to analyse whether both groups of caregivers experience different associations between caregiving intensity and views of ageing. The two groups represent different populations of caregivers, as can be seen in the description of the sample in the supplementary data (Additional file 1: Table S1). All analyses were conducted with the statistical software Stata 16.0 (Stata Corp., College Station, Texas). The level of significance was set at alpha 0.05.
Results
Description of the sample
The complete sample included 2162 informal caregivers (49.07% caring for parents, 23.59% for spouses or partners, and 26.97% for other related or non-related adults). They were on average 64.25 (± 10.25) years old; 59.02% were female and 95.42% had no migratory background. On average, they provided eleven hours of care per week (± 18.62) and were involved in 2.41 care task areas, primarily supervision and support (83.02%). The level of burden was moderate (M = 2.14, SD = ± 0.86). SA was on average 56.29 (± 11.76) years, ATOA was M = 3.00 (SD = ± 0.53), and OOA was perceived at 75.10 (± 8.16) years of age. Further information on the complete sample and the subsamples is given in Additional file 1: Table S1.
Results of analysing the association between caregiving intensity and views of ageing
Using the complete sample (Table 1 ), FE regression analysis indicated a significant association between care time and increased SA ( b = 0.06, p < 0.05). The number of care tasks was significantly associated with increased ATOA ( b = 0.07, p < 0.001) and an earlier onset of old age ( b = − 0.99, p < 0.01). No significant associations were found between caregiver burden and ATOA ( b = − 0.03, p = 0.34), SA ( b = 0.13, p = 0.81) and OOA ( b = − 0.74, p < 0.10). No significant associations were found between care time and ATOA ( b = − 0.00, p < 0.10) and OOA ( b = 0.01, p = 0.67), and there was also no significant association between care tasks and SA ( b = − 0.26, p = 0.43).
Moderator analyses with age indicated a significant interaction effect between age and burden ( b = 0.13, p < 0.05) for the outcome ATOA (Table 2 ). The other interaction effects of the models analysing the outcome ATOA were not significant (care time × age: b = 0.00, p = 0.90; care tasks × age: b = − 0.01, p = 0.82). The interaction effects in analysis with the outcome SA (care time × age: b = − 0.11, p = 0.28; care tasks × age: b = − 0.91, p = 0.17; care burden × age: b = − 0.78, p = 0.45) and OOA (care time × age: b = 0.05, p = 0.51; care tasks × age: b = 0.19, p = 0.79; care burden × age: b = −0.01, p = 0.99) were not significant either. In additional stratified analyses (Additional file 1 : Table S2), burden was associated with less positive ATOA among middle-aged caregivers ( b = − 0.08, p < 0.10) and more positive ATOA among older caregivers ( b = 0.03, p = 0.44), both non-significant associations. The number of care tasks was significantly associated with more positive ATOA ( b = 0.08, p < 0.01) and earlier onset of old age ( b = − 1.24, p < 0.05) among middle-aged informal caregivers. Among older caregivers, care time was significantly associated with less positive ATOA ( b = − 0.00, p < 0.05) and higher SA ( b = 0.05, p < 0.05), while care tasks were significantly associated with more positive ATOA ( b = 0.08, p < 0.01). For further information on the stratified analyses, see Additional file 1 : Table S2.
Moderator analyses with gender as moderator (Table 2) indicated a significant interaction effect between gender and care time (b = − 0.01, p < 0.05) for the outcome ATOA and between gender and care tasks (b = − 1.82, p < 0.05) for SA. The other interaction effects for outcome ATOA (care tasks × gender: b = 0.04, p = 0.36; care burden × gender: b = 0.00, p = 0.97), SA (care time × gender: b = − 0.04, p = 0.43; care burden × gender: b = 0.09, p = 0.93) and OOA (care time × gender: b = 0.05, p = 0.30; care tasks × gender: b = − 0.43, p = 0.57; care burden × gender: b = 0.21, p = 0.82) were not significant. In additional stratified analysis (Additional file 1: Table S3), we found a significant association between caregiving time (b = − 0.01, p < 0.001) and ATOA among female caregivers but not among male caregivers. Among female caregivers, we also found significant associations between caregiving tasks (b = 0.09, p < 0.001) and more positive ATOA, but not among male caregivers. Further analysis indicated significant associations between care tasks and lower SA (b = − 0.77, p < 0.05) and earlier OOA (b = − 1.16, p < 0.01) among female caregivers. Among male caregivers, caregiving time was significantly associated with higher SA (b = 0.09, p < 0.05), while the association between caregiving tasks and SA (b = 1.11, p < 0.10) was non-significant. For further information on these stratified analyses, see Additional file 1: Table S3.
Sensitivity analyses with type of care tasks and with a discrepancy score of subjective age were conducted and can be found in Additional file 1: Tables S4 and S5.
Discussion
This study explored if specific aspects of the care situation could affect informal caregivers’ views of ageing and if this differed as a function of caregiver’s age and gender. To answer these research questions, the number of caregiving hours, range of care tasks and level of care burden were analysed in association with ATOA, SA, and OOA. Findings indicate that all three aspects of caregiving were associated with views of ageing in different ways. Whether they were positively or negatively associated varied with the age and gender of the caregiver.
Our findings partially confirm our expectations and add to previous findings (Loi et al. 2015; Luchesi et al. 2016; Zwar et al. 2022) by showing that specific aspects of the caregiving performance are associated with views of ageing in unique ways. SA was higher among informal caregivers with increasing hours of care per week. Also, views of ageing worsened, as indicated by an earlier OOA, among caregivers providing a broader range of care tasks. This could be because a broader range of care tasks likely reflects a broader level of care needs of the care recipient. Thus, more care intensity seems to bring one’s own age, closeness to old age, and age-related associations to the forefront of one’s mind. This negative change of views of ageing, in particular of SA, may endanger caregivers’ health and well-being, as indicated by previous findings (Alonso Debreczeni and Bailey 2020; Kotter-Gruhn et al. 2009; Westerhof et al. 2014).
However, a broader range of care tasks performed by caregivers was associated with more positive ATOA. Providing more diverse care tasks likely indicates a broader range of care needs of the care recipient, but it may also highlight the caregivers’ own diverse abilities and therefore improve the perceptions of their own ageing process. Sensitivity analyses showed that the type of care task is also of relevance: household help was connected with more positive ATOA, while nursing care tasks, i.e. personal care, were associated with earlier OOA. Further research on these and further care tasks is recommended.
The significance of the caregivers’ age and gender
Aforementioned associations differed among caregivers based on their chronological age and gender. Burden, which was not directly associated with the outcomes, was associated with the caregiver’s ATOA as a function of chronological age. This perception of one’s own ageing worsened significantly more among middle-aged caregivers than among older caregivers with increasing burden. As an indicator of stress (Graessel et al. 2014 ), higher burden indicates more difficulties and a more negative evaluation of one’s ability to cope with caregiving, which could strengthen the salience of age-related cues and activate associated stereotypes of ageing (Levy 2009 ). These could highlight the caregiver’s own age-related limits and raise further concerns about their current and future ageing process. However, older caregivers may focus more on emotionally relevant goals in line with SST (Carstensen et al. 1999 ; Löckenhoff & Carstensen 2004 ). Our findings are in line with this. The positive aspects of caregiving, such as strengthening the relationship with the person in need of care, seemed to be more important for the evaluation of their own ageing than the burden of caregiving and prevented a worsening of ATOA. Further research on this is recommended.
In the stratified analysis we found further significant associations among the two age groups. Since they did not differ significantly (no significant interaction effects), these findings have to be interpreted with caution. Still, they provide further interesting insights. More diversity of care tasks improved ATOA while resulting in an earlier OOA among middle-aged caregivers. As explained before, this variety in care provides a more intense confrontation with possible age-related factors and therefore worsens the views of old age in general. However, the variety of care tasks can also provide a more nuanced contrast between one's own abilities and that of the cared-for, therefore resulting in a more positive evaluation of one’s own ageing (in terms of ATOA).
Among older caregivers, more diversity of care tasks had only a beneficial effect on the perception of their own ageing process (ATOA). This group may already be aware of difficulties that can occur with older age. Performing a broad range of care tasks may thus primarily highlight their own skills and actually negate many age stereotypes on diminished abilities and functions (Chasteen and Cary 2015 ). However, more caregiving time still worsened ATOA. This highlights that qualitative and quantitative aspects of caregiving intensity can have different consequences and should be analysed separately as done in this study.
Gender was also a significant factor for these associations. First, our findings pointed out that female caregivers were affected more in their views of ageing than male caregivers when aspects of the caregiving situation changed. This confirmed our expectations. Second, findings indicate that the pattern of change was more complex among women than among men.
Male caregivers perceived themselves as older (SA) when providing more hours of care and a broader range of care tasks. While they only differed significantly from women in the latter, these findings indicate that, for men, more caregiving in any form (time or tasks, i.e. quantitative and qualitative intensity) seems to be negatively affecting their views of ageing. In previous research male caregivers often reported difficulties with caregiving and having to learn new skills, such as cooking, cleaning and personal care (Russell 2007 ). They were also less likely to be involved in personal care (Pinquart and Sörensen 2006 ; Zygouri et al. 2021 ). More diversity of tasks may thus not provide more variety and highlight one’s own abilities, as found among female caregivers. Instead, it may only increase the challenge of caregiving and feelings of being overwhelmed, as previous findings indicated, resulting in worse views of ageing.
In contrast, among women, diversity of care improved the perceptions of one’s own ageing but worsened the views of ageing in general (OOA). Also, more caregiving hours worsened the perception of their own age, though it was their attitudes which changed and not their SA as found among men. Thus, while men feel older, women judge their own age and associated abilities more negatively. In sum, qualitative and quantitative aspects of care intensity have different effects on women and men. While men experience negative effects from both, women experience negative effects but can also benefit in particular from the qualitative aspect of care intensity, i.e. the diversity of care regarding their personal views of ageing.
Limitations and advantages of the study
A few limitations of the study need to be discussed. Given the range of our outcomes, the changes we observed (i.e. the regression coefficients) were mostly small. Still, the findings provide evidence for the significance of caregiving to views of ageing. We measured burden with a single-item construct; further research using an instrument that allows a more detailed assessment of caregiving burden is recommended. Reverse causality cannot be excluded with the FE regression models. Also, panel attrition occurred (follow-up rates: 2014 38%, 2017 63%). However, attrition was related to age, gender, education and health (Schiel et al. 2018), which were all controlled explicitly or implicitly in our analysis. Additionally, the use of FE regression analysis has the advantage of also accounting for all other unobserved time-constant variables which may be responsible for panel attrition or may be associated with the analysed variables (Brüderl 2010; Wooldridge 2010). Thus, the study has various advantages and can provide a good basis for future research and practical implications for fostering positive views of ageing among different groups of informal caregivers. It is the first study to analyse these associations with a longitudinal design and well-established instruments on views of ageing (such as the PGCMS). The large population-based panel sample and the use of FE regression analysis are major advantages, which substantially reduce the danger of bias by unobserved heterogeneity and enable consistent estimates. Also, the findings add to existing theoretical frameworks on views of ageing and highlight the significance of sociodemographic factors.
Conclusion
In sum, this study’s findings provide new insight into views of ageing among informal caregivers, as well as age and gender differences, which highlight the need for different strategies to modify care performance to prevent a deterioration of views of ageing.
The findings show that informal caregivers could benefit, in terms of better personal views of ageing, from a reduction of the hours of care. Thus, sufficient and affordable professional care services are needed to ease the strain on caregivers and prevent negative changes in their views of ageing. Additionally, care performance should be modified by reducing the hours but not the care tasks, since diversity of care tasks was associated with more positive attitudes towards one’s own ageing. For example, taking turns with professional care providers, such as using day or night care services, could be helpful, as could integrating professional care services in ways that enable caregivers to keep carrying out a broad array of tasks.
As further analyses pointed out, these suggestions should be adapted based on the gender and age of informal caregivers. Based on our findings, we recommend improving opportunities for diversity in care in all age groups of caregivers while focusing on a reduction of caregiving hours specifically for older caregivers. Quantitative caregiving intensity seems to be particularly problematic for this group’s views of ageing. In middle-aged caregiver groups, decreasing burden would be helpful for the perception of their own ageing process. This could be achieved, for example, by training caregivers in a broader range of coping strategies.
Also, male and female caregivers would benefit from decreasing the number of care hours per week, as mentioned above. Since enabling diversity in care tasks seems to be only helpful for female caregivers’ views of ageing, reduction of care hours should not be achieved by reducing the range of care tasks among female caregivers and leaving them to provide, for example, only personal care, which is often the care task female caregivers are mainly involved in (Pinquart and Sörensen 2006; Stanfors et al. 2019; Zygouri et al. 2021). Instead, designing care support that reduces intensity, not diversity, especially among older and female caregivers, is recommended, to foster more positive personal views of ageing.
Responsible editor: Morten Wahrendorf.
We analysed whether care time, burden and range of caregiving tasks were associated with informal caregivers’ subjective views of ageing (measured as attitudes towards one’s own ageing (ATOA), subjective age (SA), and onset of old age (OOA)), and whether these associations differed as a function of the caregivers’ age and gender. Adjusted cluster-robust fixed effects regression analyses were conducted with gender and age as moderators, using data of informal caregivers (≥ 40 years) from the population-based German Ageing Survey (2014, 2017). All three aspects of care intensity were associated with changes in subjective views of ageing, and this pattern was a function of the caregiver’s age and gender. Care time was significantly associated with higher SA. Care tasks were significantly associated with more positive ATOA and earlier OOA. Age moderated the association between burden and ATOA, with older adults reporting more positive ATOA. Gender moderated the association between care time and ATOA; women reported less positive ATOA than men with increasing care time, but also felt subjectively younger than men with a broader range of care tasks. Age- and gender-stratified analyses indicated further differences. Our findings suggest reducing care time, especially among older and female caregivers, to prevent a worsening of views of ageing, while being involved in a broad range of care tasks seems to (only) benefit female caregivers.
Supplementary Information
The online version contains supplementary material available at 10.1007/s10433-023-00797-4.
Supplementary Information
Below is the link to the electronic supplementary material.
Acknowledgements
The study was not preregistered. Data from the German Ageing Survey were used. It is available for scientific, noncommercial use for researchers free of charge and can be applied for via the website of the German Centre of Gerontology ( https://www.dza.de/forschung/fdz/deutscher-alterssurvey ).
Author contributions
LZ contributed to conception, design, and analysis of the data and drafted the manuscript. HHK and AH contributed to review and editing and revised the manuscript critically for important intellectual content. All authors have read and approved the final manuscript.
Funding
Open Access funding enabled and organized by Projekt DEAL. We acknowledge financial support from the Open Access Publication Fund of UKE-Universitätsklinikum Hamburg-Eppendorf and DFG–German Research Foundation.
Declarations
Competing interests
The authors declare that they have no competing interests.
Eur J Ageing. 2024 Jan 13; 21(1):4
PMC10787715 | PMID: 38217669
Background
Frailty is a medical condition characterized by decreased physiological reserve. Recent studies have found that frail patients have a reduced ability to cope with stressors, including surgery, and that frailty correlates strongly with postoperative morbidity and mortality in older patients [1–3]. The exact mechanism associated with increased mortality in frail patients is yet to be fully elucidated; however, the involvement of decreased sympathetic reserve, manifested as lesser hemodynamic variation, has been suggested [4].
The surgical Apgar score (SAS) was developed in 2007 to identify patients immediately after surgery who are at a higher risk of experiencing major complications or death within 30 days post-surgery [ 5 ]. This novel risk index that integrates three intraoperative parameters (mean blood pressure, heart rate, and blood loss volume) is suitable for routine clinical use. While validation studies across various surgical fields have been published, the presence of frailty has not been taken into consideration in these studies [ 6 – 9 ]. Therefore, limited evidence is available regarding the effects of frailty, with its associated lesser hemodynamic variation, on the SAS, which serves as a reflection of surgical invasion and stress.
We aimed to investigate the potential association between preoperative frailty and the SAS following abdominal cancer surgery. Additionally, the impact of a lower SAS on postoperative complications and hospital stay duration was also assessed.
Methods
Ethical approval
This study is a secondary analysis of a prospective observational study, which focused on the effects of 3-month postoperative recovery, as measured by the Quality of Recovery-15 in hospital, on disability-free survival. The study was approved by the Nara Medical University Institutional Review Board (Kashihara, Nara, Japan; Chairperson, Prof. M Yoshizumi; approval number: 2975; 28 April 2021). The statistical protocol of this secondary analysis was approved on 17 August 2023 (Kashihara, Nara, Japan; Chairperson, Prof. M Yoshizumi; approval number: 2975).
Inclusion and exclusion criteria
Our initial study, which focused on the effects of postoperative recovery, as measured by the Quality of Recovery-15 in hospital, on disability-free survival three months later, included a total of 230 patients aged 65 years or older who underwent elective major abdominal surgery with a cancer diagnosis [10]. Among them, patients without atrial fibrillation or cardiac pacemakers were included in the present study.
Data collection
We collected various preoperative patient characteristic data, including co-morbidities, daily medications, and frailty, at the perioperative management center where patients underwent medical interviews and were scheduled for surgery. Frailty was assessed using the Fried Frailty Phenotype Questionnaire, covering five domains (fatigue, resistance, ambulation, inactivity, and loss of weight) with a total score ranging from 0 to 5 points [11]. Patients with a total score ≥ 3 were classified as frail [11]. In terms of intraoperative data, we collected information on anesthetics used, total administered dose of ephedrine and phenylephrine, total fluid volume, surgical field, postoperative analgesia, surgical duration, and SAS. The SAS, with a total score of 0 (worst) to 10 (best), was calculated based on the following three parameters: lowest mean blood pressure (0–3 points), lowest heart rate (0–4 points), and blood loss volume (0–3 points) [5].
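The scoring thresholds themselves are not restated in this paper; the sketch below uses the cut-points from the original SAS publication by Gawande et al. [5], which should be verified against that source before any use:

```python
def surgical_apgar(lowest_map, lowest_hr, ebl_ml):
    """Surgical Apgar Score (0-10) from the three intraoperative
    parameters; cut-points as in Gawande et al. 2007 (assumed here,
    not taken from the present paper)."""
    if ebl_ml > 1000: ebl = 0      # estimated blood loss, mL
    elif ebl_ml > 600: ebl = 1
    elif ebl_ml > 100: ebl = 2
    else: ebl = 3
    if lowest_map < 40: bp = 0     # lowest mean arterial pressure, mmHg
    elif lowest_map < 55: bp = 1
    elif lowest_map < 70: bp = 2
    else: bp = 3
    if lowest_hr > 85: hr = 0      # lowest heart rate, beats/min
    elif lowest_hr > 75: hr = 1
    elif lowest_hr > 65: hr = 2
    elif lowest_hr > 55: hr = 3
    else: hr = 4
    return ebl + bp + hr
```

For example, a stable case with lowest MAP 75 mmHg, lowest heart rate 60 bpm and 50 mL blood loss would score 9, while a case with hypotension, tachycardia and major bleeding would score near 0.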
Anesthetic management
Daily oral medications used by the patients were continued except for angiotensin receptor blockers and angiotensin-converting enzyme inhibitors. No pre-surgery medication was administered on the day of surgery. Patients were allowed to have clear water orally up to two hours before entering the operating room. Intraoperative management, including the insertion of an arterial catheter, fluid therapy, and choice of cardiovascular agents, was determined by the attending anesthesiologist. Mean arterial blood pressure values were recorded at 2.5-minute intervals (when blood pressure was measured using oscillometry) or at 1-minute intervals (when an arterial catheter was used).
Outcomes
The primary outcome of this study was the SAS. Secondary outcomes were postoperative severe complications, defined as a Clavien–Dindo classification ≥ 3 [ 12 ], and length of postoperative stay.
Statistical analysis
Continuous data are presented as median [1st quartile, 3rd quartile], and categorical variables are presented as number (%). Univariate analysis was performed using the Mann–Whitney U test or Fisher's exact test, as appropriate, to compare the two groups (robust vs. frail). To assess the effect of SAS on postoperative severe complications and length of postoperative hospital stay, a cut-off value of SAS 7 was chosen because the median SAS scores in patients with and without frailty were 7 and 8, respectively. Subsequently, the secondary outcomes were compared between patients with SAS ≤ 7 and those with SAS > 7.
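The group comparison of a binary outcome (e.g. severe complications by SAS group) uses Fisher's exact test on a 2×2 table. A self-contained sketch of the two-sided test via the hypergeometric distribution follows (the study itself used SPSS; this is only an illustration of the test):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]],
    e.g. severe complications (yes/no) by SAS group (<=7 vs >7).
    Sums the probabilities of all tables, with the same margins, that
    are no more likely than the observed one."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def p(x):  # hypergeometric probability of upper-left cell = x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = p(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))
```

On Fisher's classic "lady tasting tea" table [[3, 1], [1, 3]] this gives the known two-sided p of 34/70 ≈ 0.486.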
Since this study involved a secondary analysis, sample size calculation was not performed. However, as an alternative, we performed a post hoc power analysis using G*power version 3.1 (Faul, Erdfelder, Lang, & Buchner, 2007) with a type I error of 0.05 and effect size of 0.5 (large effect size). With these parameters and the existing number of patients (robust = 165 and frailty = 45), the power was determined to be 0.82 to detect a significant difference. IBM SPSS Statistics (version 25.0; IBM Corp., Armonk, NY) was used to analyze all data, and p-values < 0.05 were considered statistically significant.
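A rough normal-approximation analogue of the post hoc power analysis can be sketched as follows. G*Power uses a noncentral-t calculation, so its reported value (0.82) differs slightly from this approximation; the function name and formula are our assumptions:

```python
from math import erf, sqrt

def power_two_sample(d, n1, n2):
    """Approximate power of a two-sided two-sample comparison at
    alpha = 0.05 with standardized effect size d, via the normal
    approximation: power = Phi(d * sqrt(n1*n2/(n1+n2)) - z_crit)."""
    phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF
    z_crit = 1.959964  # two-sided critical value for alpha = 0.05
    delta = d * sqrt(n1 * n2 / (n1 + n2))  # noncentrality parameter
    return phi(delta - z_crit)
```

With d = 0.5, n1 = 165 (robust) and n2 = 45 (frail), this yields roughly 0.84, in the same range as the 0.82 reported from G*Power.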
A post hoc analysis using nonlinear restricted cubic splines in the regression model was performed to confirm the nonlinearity of the SAS for the secondary outcomes.
Results
Out of the initial 230 patients, a total of 210 were included in this study (Fig. 1). Among them, 165 patients were classified as robust and 45 as frail. There were no statistically significant differences in preoperative characteristics between the two groups, except for sex (P = 0.01), serum albumin (P < 0.001), and blood loss volume (P = 0.01) (Table 1). The distribution of the SAS is shown in Fig. 2 and Supplemental Table 1. The median [1st quartile, 3rd quartile] values of the SAS were 7.0 [7.0, 8.0] and 8.0 [7.0, 8.0] in patients with or without frailty, respectively, a statistically significant difference (P = 0.03) (Table 1).
Patients with SAS ≤ 7 had a higher rate of serious postoperative complications (11.6% vs. 3.5%, P = 0.03) and a longer duration of hospital stay (10.0 vs. 9.0 days, P < 0.001) compared to patients with SAS >7 (Table 2 ).
Moreover, a post hoc analysis using nonlinear restricted cubic splines in the regression model demonstrated the nonlinearity of the SAS for the secondary outcomes (Supplemental Figure 1).
Discussion
This secondary analysis, involving 210 patients undergoing abdominal cancer surgery, revealed that frail patients had a lower SAS. Furthermore, patients with a SAS ≤ 7 exhibited a higher rate of postoperative severe complications and a longer duration of hospital stay compared to those with a SAS > 7.
Although frail patients had a lower SAS, among its components a significant difference was observed only for blood loss. Intraoperative blood loss caused by surgical trauma is difficult for an anesthesiologist to control. Although the total dose of cardiovascular agents and fluid volume were not statistically different between the two groups, heart rate and blood pressure were likely adjusted using cardiovascular agents. The exact mechanism linking frailty and large blood loss volume remains unclear; however, frail patients exposed to a higher inflammatory status may have increased tissue vulnerability [13].
As expected, patients with lower SAS had worse postoperative outcomes. Some previous studies have adopted different cut-off values for postoperative risk stratification [ 7 , 14 – 16 ]. Although our study used a cut-off value of 7, a sensitivity analysis using a cut-off value of 6 based on the study by Gawande et al. [ 5 ] also confirmed the impact of SAS on postoperative outcomes (Supplemental Table 2 ). However, a post hoc analysis demonstrated the nonlinearity of SAS for secondary outcomes (Supplemental Figure 1 ). This suggested that converting continuous variables to categorical variables might not be required [ 17 ].
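The restricted cubic spline analysis referenced above constrains the fit to be linear beyond the boundary knots, which is what allows SAS to be modeled as a continuous (rather than dichotomized) variable. A sketch of a Harrell-style nonlinear basis term, with illustrative knots (not those of the study):

```python
def rcs_term(x, knots, j):
    """j-th nonlinear basis term (0 <= j < len(knots) - 2) of a
    restricted cubic spline; linear beyond the outer knots."""
    t = knots
    k = len(t)
    p3 = lambda u: max(u, 0.0) ** 3  # truncated cubic
    d = t[k - 1] - t[k - 2]
    val = (p3(x - t[j])
           - p3(x - t[k - 2]) * (t[k - 1] - t[j]) / d
           + p3(x - t[k - 1]) * (t[k - 2] - t[j]) / d)
    return val / (t[k - 1] - t[0]) ** 2
```

In a regression, SAS would enter both linearly and through these terms, and nonlinearity is assessed by testing the spline coefficients jointly.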
This study had several limitations. First, the use of different assessment tools for frailty may have affected the results. Various instruments for assessing frailty are currently available, and some require measuring gait speed; in contrast, the Fried Frailty Phenotype Questionnaire used in this study assesses frailty through a questionnaire survey alone. Second, frailty may increase with cancer progression; however, since metastatic or recurrent cancers have no stage classification, we could not include cancer stage in this analysis. Third, the generalizability of the findings is limited because the study was conducted at a single center and included only patients undergoing elective surgery. Fourth, we could not determine the causal relationship between frailty and lower SAS. Finally, univariate analysis was performed to assess the association between frailty and SAS; however, no previous study has evaluated factors associated with SAS. Future studies should investigate the factors associated with SAS that may contribute to worsening postoperative outcomes. | Conclusions
This study demonstrated that frail patients have a lower SAS and that, among patients who underwent cancer surgery, those with a lower SAS have higher postoperative complication rates and longer hospital stays. | Introduction
The surgical Apgar score (SAS) is useful for predicting postoperative morbidity and mortality. However, its applicability in frail patients with minimal hemodynamic variation remains unknown. This study aimed to investigate the association between frailty and the surgical Apgar score.
Methods
This secondary analysis included 210 patients ≥ 65 years of age undergoing elective major abdominal surgery for cancer. Frailty was assessed using the Fried Frailty Phenotype Questionnaire and defined as a total score of ≥ 3. The surgical Apgar score (range, 0−10; including mean blood pressure, heart rate, and blood loss volume) was compared between patients with or without frailty using the Mann–Whitney U test. Postoperative severe complications and length of postoperative stay were compared between patients with surgical Apgar scores ≤ 7 and > 7.
Results
Among the included patients, 45 were classified as frail. The median [1st quartile, 3rd quartile] surgical Apgar scores in patients with and without frailty were 7.0 [7.0, 8.0] and 8.0 [7.0, 8.0], respectively (P = 0.03). Patients with a surgical Apgar score ≤ 7 had a higher incidence of serious postoperative complications (P = 0.03) and longer hospital stays (P < 0.001) compared with patients with a surgical Apgar score > 7.
Conclusion
Frail patients have a lower SAS, and among patients who underwent cancer surgery, those with a lower SAS have higher postoperative complication rates and longer hospital stays.
Supplementary Information
The online version contains supplementary material available at 10.1186/s40981-024-00687-3.
Keywords | Supplementary Information
| Abbreviations
SAS: surgical Apgar score
Acknowledgements
None
Patient consent statement
We obtained patient consent by verbal explanation.
Permission to reproduce material from other sources
Not applicable
Authors’ contributions
SH: data collection. MI: study coordinator, study concept and design, interpretation of data, writing of manuscript. YK: data collection. MK: interpretation of data, and revision of manuscript. All authors: critical review of manuscript, approval of final version.
Funding
None
Availability of data and materials
The data pertaining to this study are available as a spreadsheet file upon reasonable request.
Declarations
Ethics approval and consent to participate
This study was approved by the Nara Medical University Institutional Review Board (Kashihara, Nara, Japan; Chairperson, Prof. M Yoshizumi; approval number: 2975; 28 April 2021).
Competing interests
The authors declare that they have no competing interests. | CC BY | no | 2024-01-15 23:41:54 | JA Clin Rep. 2024 Jan 13; 10:2 | oa_package/55/72/PMC10787715.tar.gz |
PMC10787724 | 38217751 | Introduction
Spinal fusion surgery has been a standard of care for lumbar degenerative diseases refractory to conservative treatment and can produce satisfactory clinical results [ 1 ]. However, lumbar arthrodesis may increase biomechanical stress on the levels neighboring the fused segments, possibly causing early adjacent segment disease (ASD) [ 2 ]. Symptomatic ASD frequently results in deterioration of the clinical outcome and the need for further surgical treatment.
With the goal of establishing potential preventive methods, numerous studies have investigated the risk factors for ASD. Recently, increasing attention has been paid to the role of spinopelvic sagittal malalignment in the development of ASD. Maintaining or restoring “normal sagittal alignment” is of paramount importance in lumbar fusion surgery [ 3 ]. Although a few studies have demonstrated that pelvic incidence (PI) minus lumbar lordosis (LL, PI − LL) < 10° is a useful predictor of ASD, this simple formula has limitations [ 4 ]. It remains controversial where the ideal range of PI − LL should lie, since thresholds vary across populations [ 4 , 5 ]. Arbitrary use of an absolute numeric value for the evaluation of sagittal alignment may be misleading [ 6 ]. Emerging evidence has demonstrated that radiographic targets of surgery should be tailored to the individual [ 4 , 7 ].
Previously, Roussouly et al. [ 8 ] defined four types of spinal shapes in the healthy population based on sacral slope (SS) and the shape of lordosis. They then described the possible evolution of these “normal” types under degenerative conditions [ 9 ]. Subsequent studies demonstrated that restoring sagittal alignment to the original type can remarkably reduce complication rates after adult spinal deformity surgery [ 10 ]. Additionally, a few studies have evaluated the influence of different Roussouly sagittal profiles on the outcome of patients who received lumbar decompression or fusion surgery [ 11 , 12 ]. However, there are still no data proving the benefit of maintaining the ideal Roussouly shape in lumbar degenerative diseases, nor its association with the development of ASD. Thus, this study was performed to validate the usefulness of the Roussouly classification for predicting the occurrence of ASD after short-level lumbar fusion surgery. | Materials and methods
Patients
After the approval of Institutional Review Board, a retrospective review of one database comprising patients with lumbar degenerative diseases between January 2009 and January 2018 was performed. The patient enrollment criteria were as follows: (1) age between 40 and 80 years at the time of the index surgery, (2) treated with L4–5 or L3–5 fusion and screw fixation using the conventional posterior approach, and (3) had a follow-up duration of more than 5 years with a complete set of outcome measures and radiological examinations. The exclusion criteria were as follows: (1) had ASD observed at the caudal segment or at both cranial and caudal segments; (2) had a prior history of spinal surgery, trauma, tumor or infection; (3) the Cobb angle of lumbar curve exceeding 10° on the coronal plane; (4) diagnosed as acute or delayed deep surgical site infection after primary surgery; and (5) had a type 3 + anteverted pelvis (AP) sagittal shape.
Every patient was treated with laminectomy decompression, pedicle screw instrumentation, and fusion. Transforaminal lumbar interbody fusion (TLIF) procedures were generally performed at each level [ 1 ]. In a few patients with 2-level fusion, TLIF procedures were performed at one level. For another level with a less degenerated disc and no evidence of foraminal or central canal stenosis, posterolateral intertransverse process fusion was carried out instead of TLIF [ 13 ]. Standing posteroanterior and lateral radiographs were taken preoperatively and at each follow-up visit. The computed tomography (CT) scans and the magnetic resonance imaging (MRI) were performed before surgery. In addition, the MRI and flexion (F)–extension (E) lateral radiographs were obtained at the latest follow-up.
Radiographic evaluation
Preoperative disc degeneration of the cranial adjacent segment on MRI and facet joint degeneration of the cranial adjacent segment on CT were evaluated according to previously proposed criteria [ 14 , 15 ]. The intervertebral disc height of the cranial adjacent segment was measured on neutral lateral radiographs [ 16 ]. The following spinopelvic parameters were collected before surgery and at the 3-month follow-up: (1) PI; (2) SS; (3) pelvic tilt (PT); (4) LL: the angle subtended by the superior end plate lines of L1 and S1; (5) distal lordosis (DL): the angle between the upper endplates of L4 and S1; (6) sagittal vertical axis (SVA): the perpendicular distance between the C7 plumb line and the posterior–superior endplate of S1; (7) lordosis distribution index (LDI): the percentage contribution of DL to LL; and (8) segmental lordosis (SL): the lordosis between the upper instrumented vertebra and the lower instrumented vertebra.
Based on the previous work of Pizones et al. [ 17 , 18 ], patients were classified by both “theoretical” and “current” Roussouly types. The “theoretical” classification relied on PI to divide patients into four types: types 1 and 2 corresponded to PI < 45°, type 3 to PI between 45° and 60°, and type 4 to PI > 60° [ 19 ]. This classification provided the ideal sagittal profile for each patient: the ideal SS, lumbar apex, inflexion point, and number of vertebrae in lordosis (NVL) [ 18 ]. Then, the “current” types were evaluated using the previously proposed criteria: types 1 and 2 corresponded to SS < 35°, type 3 to SS between 35° and 45°, and type 4 to SS > 45° [ 19 ]. The lumbar apex, inflexion point, NVL, and sagittal shape were also recorded. These parameters were especially important for differentiating type 1 and type 2 shapes, as those two types share PI and SS values [ 9 , 19 ]. According to the above parameters, patients were classified as “matched” if their postoperative “current” shape matched the “theoretical” type and otherwise as “mismatched”.
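The PI- and SS-based thresholds above can be encoded directly. The sketch below groups types 1 and 2 together because, as noted, shape criteria (apex, inflexion point, NVL) rather than PI/SS are needed to separate them; the function names are ours, for illustration only:

```python
def theoretical_group(pi):
    """Group from pelvic incidence: '1-2' (< 45°), '3' (45°–60°), '4' (> 60°)."""
    if pi < 45:
        return "1-2"
    return "3" if pi <= 60 else "4"

def current_group(ss):
    """Group from sacral slope: '1-2' (< 35°), '3' (35°–45°), '4' (> 45°)."""
    if ss < 35:
        return "1-2"
    return "3" if ss <= 45 else "4"

def is_matched(pi, ss):
    """'Matched' when the current group equals the theoretical one."""
    return theoretical_group(pi) == current_group(ss)
```

For example, a high-PI patient whose postoperative SS indicates a retroverted shape would be flagged as mismatched.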
In the current study, all radiographic parameters were measured twice at an interval of 1 week by a well-trained observer, and the mean of both measurements was used for subsequent analysis. The values of intraobserver reproducibility were calculated and quantified by the intraclass correlation coefficient (ICC) for all measurements. There were strong intraobserver agreements for all parameters, as all ICCs exceeded 0.8.
ASD definition
The diagnosis of radiological degeneration was made when radiographs and MRI showed one or more of the following pathologies, not present preoperatively, at the cranial segment immediately adjacent to the fusion: (1) narrowing of disc height of > 10% or development of slippage > 3 mm on an upright lateral radiograph [ 3 , 20 , 21 ], (2) a sagittal translation of more than 3 mm or an intervertebral angle change of more than 10° on the F–E modality [ 22 , 23 ], or (3) advancement in disc degeneration, disc herniation, or spinal canal stenosis evaluated by MRI [ 21 , 24 ]. ASD was defined as newly developed or aggravated radiological degeneration adjacent to the fused levels that caused recurrent clinical symptoms, such as low back and leg pain, numbness, or intermittent claudication, during the follow-up period [ 21 , 23 , 25 ].
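The radiological criteria above form a disjunction (any one suffices). An illustrative encoding, with hypothetical variable names of our choosing:

```python
def radiological_degeneration(disc_height_loss_pct, slip_mm,
                              fe_translation_mm, fe_angle_change_deg,
                              mri_progression):
    """True if any of the article's three radiological criteria is met."""
    return (disc_height_loss_pct > 10          # (1) disc-height narrowing
            or slip_mm > 3                     # (1) slippage on lateral view
            or fe_translation_mm > 3           # (2) F-E sagittal translation
            or fe_angle_change_deg > 10        # (2) F-E angular change
            or mri_progression)                # (3) progression on MRI
```

ASD itself additionally requires recurrent clinical symptoms, so this check covers only the imaging half of the definition.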
Statistical analyses
Statistical analyses were performed using SPSS version 25.0 (IBM Corp., Armonk, NY). The unpaired t -test was used to determine the differences in the continuous data between ASD and non-ASD groups. A chi-square test or Fisher’s exact test, depending on the number of subjects involved, was used for categorical data analysis. A p value of less than 0.05 was considered statistically significant. Variables with p < 0.1 in the univariate analysis were included in the multivariate analysis with a forward stepwise method to evaluate adjusted associations between potential variables and ASD development.
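For the 2 × 2 case, the chi-square statistic mentioned above has a closed form, χ² = n(ad − bc)² / [(a + b)(c + d)(a + c)(b + d)]. A sketch with illustrative counts (not the study's data):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (without continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

When expected counts are small, Fisher's exact test replaces this statistic, as stated in the Methods.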
The relationships between postoperative spinopelvic parameters and age, as well as PI, were analyzed using the Pearson or Spearman correlation analysis, and simple linear regressions were simultaneously conducted. In a subanalysis, patients were stratified by both “theoretical” and “current” Roussouly types. A one-way analysis of variance (ANOVA) test was used to evaluate differences in the spinopelvic parameters among types. | Results
Patients
A total of 234 consecutive patients were enrolled in this study. The average age at the index surgery was 60.1 years (range, 41–78 years). The fusion level was L4–5 in 118 cases and L3–5 in 116 cases. With a mean follow-up duration of 70.6 months (range, 60–121 months), evidence of ASD was found in 68 cases. The pathologies of radiological degeneration included progression of retrolisthesis in 28 patients, spinal stenosis in 24 patients, and aggravation of disc herniation in 16 patients. To date, 31 patients had received revision surgery due to ASD, while the rest were relieved by conservative treatment.
As shown in Table 1 , the characteristics of the ASD and non-ASD groups did not differ statistically in terms of sex, Pfirrmann grade, facet grade, disc height, body mass index (BMI), or follow-up duration, but the age at the index surgery in the ASD group was significantly higher than that in the non-ASD group ( p < 0.001). Meanwhile, the differences in fusion level and etiology between the groups were statistically significant (all p < 0.05). Regarding medical comorbidities, a difference was detected only for osteoporosis ( p = 0.043).
Comparison of spinopelvic alignment between groups
There were significant differences in preoperative LL and SVA between the ASD and non-ASD groups (all p < 0.05). Postoperatively, PI, SS, LL, and DL in the ASD group were lower than those in the non-ASD group (all p < 0.05; Table 2 ). The distribution of “theoretical” types was similar between the ASD and non-ASD groups, but there were more “current” shapes classified as type 1 or 2 and fewer as type 3 in the ASD group when compared with the non-ASD group ( p < 0.001). Moreover, 80.9% (55/68) of the patients who suffered ASD after surgery were mismatched, while 48.2% (80/166) of the patients without ASD had a mismatched type ( p < 0.001).
Pearson or Spearman correlation tests showed that age was only correlated to SVA ( r = 0.192; p = 0.003). PI was correlated to PT ( r s = 0.612; p < 0.001), SS ( r = 0.727; p < 0.001), LL ( r = 0.479; p < 0.001), SL ( r s = 0.395; p < 0.001), LDI ( r s = −0.300; p < 0.001), PI − LL ( r s = 0.418; p < 0.001), and SVA ( r = 0.160; p = 0.014) but not DL ( r s = 0.098; p = 0.133). Linear regression analysis (Fig. 1 ) found a linear correlation between PI and lumbar sagittal parameters (LDI = −0.4891*PI + 89.31, R 2 = 0.086, p < 0.001; PI − LL = 0.4762*PI-20.81, R 2 = 0.198, p < 0.001).
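The fitted coefficients reported above can be applied to compute expected values at a given PI. A sketch using the study's reported fits, applied purely for illustration:

```python
def predicted_ldi(pi):
    """LDI = -0.4891 * PI + 89.31 (reported linear fit)."""
    return -0.4891 * pi + 89.31

def predicted_pi_minus_ll(pi):
    """PI - LL = 0.4762 * PI - 20.81 (reported linear fit)."""
    return 0.4762 * pi - 20.81
```

The negative LDI slope illustrates why a single fixed LDI window cannot suit low- and high-PI patients equally well.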
Risk factors of ASD
Age; sex; fusion level; etiology; osteoporosis; and postoperative PI, SS, LL, DL, and Roussouly type match were included in the multivariate analysis. The model retained four independent risk factors: age (OR = 1.058, 95% CI 1.013–1.105; p = 0.012), 2-level fusion (OR = 2.983, 95% CI 1.349–6.597; p = 0.007), postoperative DL (OR = 0.949, 95% CI 0.911–0.989; p = 0.014), and postoperative mismatched Roussouly type (OR = 4.629, 95% CI 2.239–9.570; p < 0.001). When patients were stratified by “theoretical” types, those who had a mismatched type were more predisposed to the occurrence of ASD than those matched to their ideal shape in all four types, and statistical differences were found in types 2, 3, and 4 (Fig. 2 ).
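Odds ratios such as those above come from exponentiating logistic-regression coefficients, and a per-unit OR can be rescaled to a larger increment of the predictor. A sketch, using the reported per-year age OR as the worked value:

```python
import math

def odds_ratio(beta, increment=1.0):
    """OR for a given increment of the predictor: exp(beta * increment)."""
    return math.exp(beta * increment)

# Reported per-year OR for age -> implied coefficient, rescaled to a decade.
beta_age = math.log(1.058)
or_per_decade = odds_ratio(beta_age, increment=10)
```

So an OR of 1.058 per year compounds to roughly 1.76 per decade of age, which conveys the clinical magnitude more directly.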
Subanalysis by Roussouly type
When considering the “theoretical” types, the differences in age and fusion level among the groups were not statistically significant. However, there were significant differences among the four theoretical types in terms of all spinopelvic parameters except SVA. Type 2 exhibited significantly lower values for LL, DL, and SL compared with types 1, 3, and 4 (Table 3 ). When considering the “current” types, the percentage of 2-level fusion in types 1 and 2 was significantly higher compared with types 3 and 4 ( p < 0.001). Furthermore, type 2 exhibited the highest PT and the lowest values for LL, DL, and SL among the four groups. The LDI of types 2, 3, and 4 became similar and significantly lower than that of type 1 ( p < 0.001; Table 4 ). | Discussion
Although the importance of spinopelvic alignment and its correlation with ASD have been validated in many studies, “normal” alignment remains poorly defined. Previous studies have investigated the relationship between PI − LL mismatch and the occurrence of ASD. In a biomechanical study with musculoskeletal modeling, Senteler et al. [ 5 ] concluded that PI − LL ≥ 15° was a predictor of revision surgery for ASD. Rothenfluh et al. [ 26 ] showed that after receiving lumbar posterolateral fusion, patients with PI − LL ≥ 10° had a tenfold greater risk of developing ASD than controls. However, a 10-year follow-up study by Toivonen et al. [ 6 ] demonstrated that postoperative PI − LL > 9° did not result in a significantly increased risk of revision for ASD. Our study also did not find a statistically significant effect of PI − LL on the rate of ASD. Patients with low PI were likely to be PI − LL matched, while patients with high PI tended to be classified as mismatched. Hence, reaching the simplistic target of PI − LL match does not always prevent the occurrence of ASD. Subsequent studies proposed that sagittal realignment should take the entirety of age-related degenerative changes into account and determined new age-specific values for sagittal parameters, such as age-adjusted PI − LL [ 7 ]. In the current study, age was also recognized as an independent risk factor for ASD. However, our results showed that age correlated only with SVA. Thus, it remains controversial whether age-specific sagittal parameters can be used in the assessment of ASD.
With regard to sagittal alignment, postoperative DL and mismatched Roussouly type were risk factors for ASD. Degenerative diseases frequently involve the lower lumbar spine and lead to loss of DL and anterior displacement of the axis of gravity [ 27 ]. Pelvic retroversion and upper lumbar hyperlordosis are then recruited to maintain sagittal balance [ 28 ]. Our results showed that, compared with the theoretical types, there was an increased incidence of type 1 and 2 shapes among the current types, because high-PI types (types 3 and 4) can evolve into retroverted types through pelvic retroversion [ 10 ]. Hyperextension of adjacent segments is another common local compensatory mechanism that limits the consequences of lumbar kyphosis on the shift of the axis of gravity [ 29 ]. Cranial adjacent segments extend further to place the upper lumbar spine posteriorly and avoid forward inclination of the trunk. Owing to pelvic retroversion and altered lordosis distribution, the lumbar sagittal shape and the location of the lumbar apex may change, finally resulting in the degenerative evolution of the original Roussouly type. If DL is not restored after fusion surgery, PT remains impaired and the proximal lumbar levels continue to extend to maintain sagittal balance. This compensatory mechanism increases stress on the posterior structures, exposes the adjacent segment to the risk of retrolisthesis, and may result in accelerated degeneration [ 29 ]. Therefore, if the spinopelvic morphology does not parallel the corresponding ideal type, the patient will be predisposed to a greater risk of ASD (Figs. 3 and 4 ).
Recently, the role of DL in spinal biomechanics has been noted, and LDI has been used to evaluate the risk of ASD development. Bari et al. [ 30 ] reported that in patients who received lumbar fusion surgery, hypolordotic lordosis maldistribution was associated with an increased risk of revision surgery. Zheng et al. [ 27 ] also found that patients with low LDI were at greater risk of developing ASD than those with high LDI after L4–S1 fusion for degenerative disease. However, it is not appropriate to define the range of 50–80% as the optimal cutoff for LDI, because there is a linear and negative correlation between PI and LDI. As shown in previous studies, proximal levels are recruited to increase total lordosis as PI increases, but L4–S1 lordosis is nearly constant (approximately 35°) and independent of PI [ 30 , 31 ]. Our results also showed that PI was not correlated with DL, indicating that different PI values may share the same target for DL reconstruction. Additionally, given the lower PI in the ASD group compared with the non-ASD group, the presence of a worse LDI suggests that the ASD group did not receive optimal restoration of DL.
When stratified by theoretical types, the incidence of ASD was highest in type 2. Subanalysis showed that both DL and LDI of all four theoretical types were worse than their ideal values. In addition, theoretical type 2 had the lowest DL, and its LDI was comparable with that of theoretical type 3. This may help explain why type 2 had the highest incidence of ASD. Regarding the current types, patients with theoretical type 3 or 4 who underwent 2-level fusion were more likely to evolve into current type 1 or 2, suggesting that hypolordotic fusion was more common with 2-level fusion. Similarly, the DL of current type 2 was the lowest among the groups. The PT and LDI of current type 2 became even worse than those of current type 3, as high-PI types did not receive optimal reconstruction of DL and converted into retroverted types [ 9 ]. Duan et al. [ 12 ] also reported that preoperative PT in current type 2 was higher than that in current type 3, and a decrease in PT was observed in type 2 after surgery. They concluded that pelvic retroversion was the main type of compensation in current types 1 and 2. However, it should be noted that current type 2 comprised patients with both low and high PI. The capacity for pelvic retroversion is limited in patients with low PI, and hyperextension of adjacent segments may be their main compensatory mechanism [ 29 , 32 ]. Unlike SS, PI is a fixed value for any given individual and is not modified by degenerative changes or spinal arthrodesis [ 33 ]. According to the PI value, we can better infer which ideal sagittal profile a patient belongs to and set surgical goals accordingly [ 9 , 19 ].
Limitations
This study had several limitations. First, other possible factors associated with ASD were not considered. Owing to incomplete data, factors such as paraspinal muscle atrophy and bone mineral density were not included. Second, the strength of our results was limited by the modest sample size. Given the low proportion of some types, such as theoretical types 1 and 2, it was difficult to generalize from the limited number of patients. Additionally, type 3 AP was not involved, as only six patients who met the inclusion criteria were identified as this type. More data are needed to draw robust conclusions. Finally, ASD is a time-dependent phenomenon, and some patients currently without ASD may develop it over time. Thus, a longer-term follow-up study should be conducted to reduce this bias. | Conclusion
In summary, loss of DL and mismatched Roussouly type were significant risk factors affecting the occurrence of ASD after short-level fusion surgery for lumbar degenerative diseases. In pathologic patients, PI, rather than SS, is a reliable index for classifying sagittal types. To decrease the incidence of ASD, it is important to achieve an appropriate value and distribution of DL that restores sagittal alignment back to the ideal Roussouly type. | Background
Recent studies demonstrated that restoring sagittal alignment to the original Roussouly type can remarkably reduce complication rates after adult spinal deformity surgery. However, there are still no data proving the benefit of maintaining the ideal Roussouly shape in lumbar degenerative diseases or its association with the development of adjacent segment disease (ASD). Thus, this study was performed to validate the usefulness of the Roussouly classification for predicting the occurrence of ASD after lumbar fusion surgery.
Materials and Methods
This study retrospectively reviewed 234 consecutive patients with lumbar degenerative diseases who underwent 1- or 2-level fusion surgery. Demographic and radiographic data were compared between the ASD and non-ASD groups. The patients were classified by both “theoretical” [based on pelvic incidence (PI)] and “current” (based on sacral slope) Roussouly types. Patients were defined as “matched” if their “current” shapes matched the “theoretical” types and otherwise as “mismatched”. Logistic regression analysis was performed to identify the factors associated with ASD. Finally, clinical data and spinopelvic parameters of the “theoretical” and “current” types were compared.
Results
With a mean follow-up duration of 70.6 months, evidence of ASD was found in 68 cases. Postoperatively, the ASD group had more “current” shapes classified as type 1 or 2 and fewer as type 3 than the non-ASD group ( p < 0.001), but the distribution of “theoretical” types was similar between groups. Moreover, 80.9% (55/68) of patients with ASD were mismatched, while 48.2% (80/166) of patients without ASD were mismatched ( p < 0.001). A multivariate analysis identified age [odds ratio (OR) = 1.058], 2-level fusion (OR = 2.983), postoperative distal lordosis (DL, OR = 0.949), and mismatched Roussouly type (OR = 4.629) as independent risk factors for ASD. Among the four “theoretical” types, type 2 had the lowest lumbar lordosis, DL, and segmental lordosis. Among the “current” types, current type 2 was associated with higher rates of 2-level fusion, worse DL, and greater pelvic tilt compared with the other current types.
Conclusions
Loss of DL and mismatched Roussouly type were significant risk factors for ASD. To decrease the incidence of ASD, an appropriate value of DL should be achieved to restore sagittal alignment back to the ideal Roussouly type.
Level of Evidence: Level 4.
Keywords | Abbreviations
ASD: Adjacent segment disease
BMI: Body mass index
PI: Pelvic incidence
PT: Pelvic tilt
SS: Sacral slope
LL: Lumbar lordosis
DL: Distal lordosis
SL: Segmental lordosis
LDI: Lordosis distribution index
SVA: Sagittal vertical axis
NVL: Number of vertebrae in lordosis
TLIF: Transforaminal lumbar interbody fusion
CT: Computed tomography
MRI: Magnetic resonance imaging
OR: Odds ratio
CI: Confidence interval
ICC: Intraclass correlation coefficient
ANOVA: One-way analysis of variance
Acknowledgements
Not applicable
Author contributions
All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by M.W., X.W., and H.W. The first draft of the manuscript was written by M.W., and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Funding
This work was supported by the National Natural Science Foundation of China (grant no. 82160555) and Natural Science Foundation of Xinjiang Uygur Autonomous Region (grant no. 2022D01A317).
Availability of data and materials
The datasets generated and analysed during the current study are not publicly available due to the sensitivity of the data and concerns regarding privacy protection.
Declarations
Ethics approval and consent to participate
This study was performed in line with the principles of the Declaration of Helsinki. This study was approved by the ethics committee of Affiliated Changzhou Second People’s Hospital of Nanjing Medical University and Affiliated Drum Tower Hospital of Nanjing University Medical School [2019–029-01]. All methods were carried out in accordance with relevant guidelines and regulations. Informed consent was obtained from all individual participants included in the study.
Consent for publication
The authors affirm that human research participants provided informed consent for publication of the images in Figs. 3 and 4 .
Competing interests
The authors declare that they have no competing interests. | CC BY | no | 2024-01-15 23:41:54 | J Orthop Traumatol. 2024 Dec 13; 25:2 | oa_package/ad/cd/PMC10787724.tar.gz |
|
PMC10787732 | 38218867 | Introduction
Potato is the third most important commodity in the world and represents an essential energy source for human consumption. Potato tubers are processed to provide food, starch, crisps, food additives, and beverages and are used for some pharmaceutical products 1 . There is a demand for high-quality tubers that fulfill standards in appearance, size, shape, and flesh or skin color 2 . Regardless of the cultivar, it is essential to guarantee undamaged, appealing, and healthy tubers. Yet these characteristics are challenging to obtain in the context of climate change 3 . Most tuber disorders result from interactions among environmental conditions, cultivation systems, storage, harvest, and transportation. During growth and handling operations, tubers can sustain various types of mechanical damage. Likewise, numerous bio-aggressors degrade potato quality and represent a critical threat to their marketability 4 , 5 .
Such is the case of the common scab (CS) bacterial disease, one of the most important blemish diseases caused by a pathosystem of soil-borne, gram-positive bacteria of the genus Streptomyces . The symptoms appear as superficial scab lesions or deep-pitted lesions, downgrading the harvest and resulting in significant economic losses for the growers. Only a few of the several hundred described species are known to be pathogenic to the crop. According to Braun et al. 6 , the two most abundant common scab-causing bacteria in Europe are Streptomyces turgidiscabies and S. europaeiscabiei . The resistance mechanism to CS is not yet well defined and is still under study. Potato breeders attempt to mitigate the disease spread by developing resistant genotypes that can satisfy both the field and market requirements 7 . Different potato varieties have been recognized to have high levels of resistance to CS under field screenings. Quality assessments of tubers as well as disease severity are generally conducted by visual scorings or manual measurements 8 , 9 . Although these methods have provided valuable information for selecting desirable genotypes, they are imprecise, time-consuming, and subjective.
On the other hand, digital image processing has improved the consistency and accuracy of plant trait assessment by diminishing the variability caused by human bias 10 . Previously reported studies have evaluated tuber shape, size, and color 11 – 14 , with accuracies ranging from 70 to 94% compared with caliper measurements and human scorings. Although the results show high accuracies, some challenges remain, such as the lack of user-friendly tools, automation, or adaptation to low-cost and high-throughput phenotyping 15 .
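Morphology traits like those assessed in these studies are commonly derived from binary tuber masks. One standard descriptor is roundness, 4πA/P², which equals 1 for a perfect circle and falls below 1 for elongated shapes; a minimal sketch (our illustration, not the cited pipelines):

```python
import math

def roundness(area, perimeter):
    """4*pi*A / P**2: 1.0 for a circle, < 1 for elongated shapes."""
    return 4 * math.pi * area / perimeter ** 2

def aspect_ratio(length, width):
    """Major/minor axis ratio; 1.0 for a perfectly round tuber."""
    return length / width
```

In practice, area and perimeter would come from a segmentation step (e.g., contour extraction on the tuber mask).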
Similarly, some approaches have been reported to assess potato tuber defects 16 , 17 . Samantha et al. 18 proposed a method to detect CS based on image analysis in the RGB (red, green, and blue) color space. However, the method uses a series of filters and an unsupervised classifier that is very sensitive to changes in image acquisition conditions 19 . In the infrared wavelength range, it has been shown that infected and asymptomatic areas can be discriminated. Despite the high correlation with standard severity measurements, the equipment to measure diffuse reflectance is costly and requires seasoned staff to perform acquisitions. Dacal-Nieto et al. 20 presented a non-destructive approach using hyperspectral imaging combined with supervised classifiers to identify areas affected by CS. The results showed an accuracy of 97.1%, clearly distinguishing the severity levels. However, the method requires a special image-acquisition system that lacks the operability needed in a breeding context.
In the past decade, deep learning (DL) techniques, especially those based on Convolutional Neural Networks (CNNs), have become state-of-the-art approaches in pattern recognition, including plant disease detection and scoring 21 , 22 . CNNs build a hierarchy of visual representations tuned for a particular task; for image recognition and classification, they have proved to yield accurate and robust models 22 . They require a training set to calibrate a model, i.e., a set of weights and biases adapted to the target task. Among their advantages, CNNs can process new data and identify significant features with minimal human supervision and tuning. In the case of potato tubers, Oppenheim et al. 4 proposed a CNN-based method to identify tuber diseases from patches of grayscale images, achieving discrimination accuracies of over 90%. However, the sole identification of diseases is not sufficient to select varieties; an additional scoring of severity levels provides a finer insight into their relative resistance. Thus, a robust, user-friendly, and automated imaging method to assess CS infections and tuber morphology is highly valued.
Therefore, this study addresses three objectives. The first is to evaluate the morphological traits of potato tubers, providing insight into tuber quality for the market. The second is to detect and quantify the severity level of CS using CNNs. The third is to develop a fully automated and user-friendly application combining the two previous objectives.

Materials and methods
Plant material
Potato tuber samples were collected from two sources: (1) Graminor’s core collection grown in field experiments at Ridabu, Norway, from 2019 to 2022, and (2) a greenhouse inoculation experiment in which 840 interrelated potato lines were planted in sterile peat soil infected with a mixture of three S. europaeiscabiei strains from the NIBIO collection of plant pathogens (isolate nos. 08-12-01-1, 08-74-04-1, 09-185-2-1). In total, 7200 tubers of yellow and red genotypes were used. The core collection tubers represented different levels of infection naturally occurring in the field. Figure 1 shows four tuber samples covering the full range of infection, from completely healthy to maximum severity.
Image acquisition
Tubers were washed, dried, and manually placed in groups of six onto a fiber blue background, along with a 5 cm ruler and a color scale palette for further analysis. Images were captured with a Canon PowerShot G9 X Mark II camera with a 10.2–30.6 mm, 1:2.0–4.9 lens and a resolution of 20.1 megapixels. The camera was mounted on a Hama photo stand in top view at 40 cm from the target. The target was uniformly illuminated by 85 W, 5500 K daylight bulbs. Camera settings were selected for the best view of the tubers: ISO 1250, aperture f/11, exposure time 1/125 s, and focal length 10.2 mm. Digital images were stored in JPEG format with a pixel resolution of 7864 × 3648. Tuber size varied from 172 to 256 pixels in length. Figure 2 shows an illustration of the image acquisition protocol.
Database
The database contains 1100 images with 7154 yellow and red tubers. The tubers were categorized into five severity classes, with class 1 being healthy and classes 2 to 5 representing increasing levels of infection. The classes were attributed based on the percentage of the tuber skin area covered by lesions. A first approximation of the infected-area percentage was obtained semi-automatically, using the machine-learning tool Trainable Weka Segmentation (TWS) 23 as a plugin for the ImageJ software 24 . Manual annotations of 200 images containing 1000 tubers were used to train a random forest model 25 . The data was segmented into four classes (background, red tuber, yellow tuber, and scab) through a pixel-wise classification in which each pixel was assigned to one of the four classes. This first quick, color-based approximation was then corrected and validated manually.
Image and data processing
All the image processing was conducted in Python 26 , using the OpenCV (Open Source Computer Vision Library) package 27 for image manipulation and tuber morphology analysis, and TensorFlow 28 for the deep-learning part. The developed algorithms were automated in a GUI (graphical user interface) that can be run on a single image or on a large group of images as a batch.
GUI
The GUI, hereafter called ScabyNet (Fig. 3 ), was developed in Python using the Tkinter 29 and customTkinter 30 packages. The GUI is user-friendly and contains two main modules and a tab designed as a home window. Modules 1 and 2 correspond respectively to the estimation of tuber morphology traits and of the lesion area caused by CS.
Home
The home module contains information about the functionality of ScabyNet, where the user receives instructions on how to use the application.
Module 1: morphology features
The morphology module is a fully embedded data processing pipeline that estimates potato tuber morphology characteristics from color images. The module measures, for each tuber, the length, width, area, length-to-width ratio, circularity, and color values, distinguishing between red and yellow tubers. The color analysis is performed in the L*a*b* color space: lightness (L*), with the a* and b* chromaticity values corresponding to the green–red and blue–yellow axes, respectively 31 .
The steps in the processing chain of this module are presented in Fig. 4 and described in more detail in the following subsections and the flowchart in Fig. 5 .
Resizing and color segmentation
To remove the background, facilitate object identification, and decrease computation time, the image size was reduced from 4864 × 3648 to 1459 × 1094 pixels (one-third, conserving proportions). Then, a color conversion from RGB to the L*a*b* color space was applied using the OpenCV package. This color representation was chosen because it was designed to approximate the human psychovisual representation. A binary filter was applied to remove undesired objects: each channel of the image was examined to determine an adapted threshold, and the resulting binary image was used as a mask on the original one.
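As an illustration of the thresholding and masking step, the following minimal NumPy sketch builds a binary mask from per-channel bounds and applies it to the image; the bound values are placeholders for illustration, not the thresholds used by ScabyNet:

```python
import numpy as np

def threshold_mask(image, low, high):
    """Binary mask keeping pixels whose channels all lie within [low, high].

    image: H x W x 3 array (e.g. after conversion to L*a*b*);
    low, high: per-channel bounds (illustrative values, not ScabyNet's).
    """
    low, high = np.asarray(low), np.asarray(high)
    return np.all((image >= low) & (image <= high), axis=-1)

def apply_mask(image, mask):
    """Zero out background pixels, keeping only the masked foreground."""
    return image * mask[..., None]
```

In the actual pipeline, the color conversion would be done beforehand (OpenCV provides cv2.cvtColor for this) and the mask would feed the subsequent morphological cleaning.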
Morphological operation: opening
Due to variations in lighting intensity, cast shadows, and reflections, some objects in the image contained gaps. These were corrected with the flood-fill algorithm 32 , ensuring object integrity in the image. Despite this correction, some artifacts remained in the image; to discard them, a morphological opening operation was applied 33 . The opening consists of removing pixels on the object boundaries (erosion) and then adding pixels to the new boundaries (dilation) of the resulting image. In both cases, the same 5 × 5-pixel square kernel was used as the structuring element. This structuring element identifies the pixel to be processed and defines the neighborhood of connected components based on this binary information. As a result of the opening, small objects were removed from the image while the shape and size of the tubers were preserved.
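The opening operation can be sketched in plain NumPy as an erosion followed by a dilation with the same 5 × 5 square kernel; this is a simplified stand-in for OpenCV's cv2.morphologyEx:

```python
import numpy as np

def erode(mask, k=5):
    """Binary erosion: keep a pixel only if its whole k x k neighborhood is set."""
    r = k // 2
    padded = np.pad(mask, r, mode="constant", constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, k=5):
    """Binary dilation: set a pixel if any pixel in its k x k neighborhood is set."""
    r = k // 2
    padded = np.pad(mask, r, mode="constant", constant_values=False)
    out = np.zeros_like(mask, dtype=bool)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def opening(mask, k=5):
    """Opening = erosion then dilation: removes objects smaller than the kernel."""
    return dilate(erode(mask, k), k)
```

Objects larger than the kernel survive the opening with their shape intact, while isolated noise pixels disappear, which matches the behavior described above.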
Identifying connected blob components
Once the segmentation and color reduction were applied, the next step was to identify the tubers. In some cases, tubers were placed too close to each other in the image and were therefore detected as a single component. To solve this issue, a distinction between connected and disconnected components was performed based on convexity criteria. The operation works in two steps: finding the contours, then computing their convex hulls 34 . Convex objects, i.e., individual tubers, were copied and kept apart (noted image A), while objects corresponding to connected blobs, i.e., joint tubers (noted image B), were submitted separately to a segmentation process to split the connected blobs into the correct individual tubers.
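The convexity criterion can be illustrated by comparing a contour's area with the area of its convex hull (its solidity); the 0.95 threshold below is an assumed value for illustration, not the one used in ScabyNet:

```python
def shoelace_area(pts):
    """Polygon area via the shoelace formula (vertices given in order)."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def convex_hull(pts):
    """Andrew's monotone-chain convex hull (returns hull vertices in order)."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def is_convex_blob(contour, solidity_threshold=0.95):
    """Flag a contour as a single tuber (convex) or a joint blob (non-convex)."""
    solidity = shoelace_area(contour) / shoelace_area(convex_hull(contour))
    return solidity >= solidity_threshold
```

A single tuber has solidity close to 1, whereas two touching tubers form a waisted blob whose area is noticeably smaller than that of its hull.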
Segmenting with watershed transformation
The image containing only the connected blob components (image B) was processed with the watershed transformation to split the blobs into individual tubers and obtain the correct tuber count and morphology. The watershed transformation is based on topographic distances. It identifies the center of each element in the image using erosion and, from this point to the edges of the object, estimates a distance map. This topographic map is then filled according to the gradient direction, as if it were filled with water, so that all connected components are separated (noted image C) 35 , 36 . Subsequently, image C was combined with image A to gather all the identified individual tubers in a single image (D).
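As a dependency-free illustration of the idea behind the watershed split (growing regions outward from object centers), the sketch below labels each foreground pixel with its nearest seed via a multi-source breadth-first search. It is a simplification: the full watershed used in the pipeline additionally follows the gradient of the distance map.

```python
from collections import deque

def split_blob(mask, seeds):
    """Assign each foreground pixel to the nearest seed (multi-source BFS).

    mask: grid (list of lists) of bools; seeds: (row, col) blob centers.
    Returns a label grid: 0 = background, i + 1 = region grown from seeds[i].
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    queue = deque()
    for i, (r, c) in enumerate(seeds):
        labels[r][c] = i + 1
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]  # inherit the nearest seed's label
                queue.append((nr, nc))
    return labels
```

Growing all seeds at the same speed splits a connected blob along the midline between its centers, which is the essence of the watershed separation described above.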
Filtering by size and circularity
Once the objects were complete, the tubers were isolated from the non-target objects (ruler, color scale palette, genotype serial tag, etc.). For this purpose, a filter was applied first according to the object's area and then according to its circularity, based on Eq. ( 1 ) given by Wayne Rasband 24 . After inspecting the areas and circularities of the tubers, the minimum and maximum values were determined: only objects with an area between 11,000 px 2 and 104,000 px 2 and a circularity higher than 0.7 were retained.
Estimating morphology features
Tubers were identified and labeled with an ID, and for each one the following parameters were measured: area, perimeter, length, width, length-to-width ratio, and circularity. Afterward, to provide a visual representation of the processed input image, the original image was masked with the results from the size and circularity filter, leaving only the tubers. The following formula was used to calculate circularity: circularity = 4π × area / perimeter², which equals 1 for a perfect circle and approaches 0 for increasingly elongated shapes.
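These shape descriptors reduce to a few lines of code; the axis swap in the ratio ensures length ≥ width regardless of tuber orientation:

```python
import math

def circularity(area, perimeter):
    """ImageJ-style circularity: 4 * pi * area / perimeter**2 (1.0 = perfect circle)."""
    return 4.0 * math.pi * area / perimeter ** 2

def length_width_ratio(length, width):
    """Length-to-width ratio; axes are swapped if needed so that length >= width."""
    length, width = max(length, width), min(length, width)
    return length / width
```

For a circle of radius r, area = πr² and perimeter = 2πr, so circularity = 1 exactly; a square yields π/4 ≈ 0.785, comfortably above the 0.7 filtering threshold used here.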
Identifying tuber skin color
The tubers presented a complex color spectrum corresponding to variations of the skin, buds, lenticels, mechanical damage, common scab symptoms, and other possible defects. To overcome this, color identification was performed using K-means color quantization 37 , both to facilitate identification and to reduce computation time. The process reduces the number of colors in an image from the 256 × 256 × 256 possible values of the 8-bit RGB color model to a desired number of colors while preserving the essential information. In this case, three colors were selected (the background and the two considered tuber colors), and the corresponding cluster centroids were determined. Each pixel was then assigned to the cluster with the minimum Euclidean distance between its color and the cluster centroid. Iterations were repeated until the centroids no longer changed, the distance between the centroids and their assigned colors was minimal, and the distance between centroids was maximal. Subsequently, the image was segmented into three colors, and an 8-bit value was assigned to each object: ‘0’ for the background, ‘1’ for red tubers, and ‘2’ for yellow tubers.
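A minimal Lloyd-iteration sketch of the K-means quantization is shown below. Since the paper does not specify the initialization, the starting centroids are supplied by the caller here; in practice OpenCV's cv2.kmeans would handle both initialization and iteration.

```python
import numpy as np

def quantize_colors(pixels, init_centroids, iters=10):
    """Basic K-means (Lloyd) color quantization of an (N, 3) pixel array.

    init_centroids: starting cluster colors (caller-supplied assumption).
    Returns per-pixel cluster labels and the final centroid colors.
    """
    pixels = np.asarray(pixels, dtype=float)
    centroids = np.asarray(init_centroids, dtype=float).copy()
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned pixels.
        for j in range(len(centroids)):
            members = pixels[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids
```

With k = 3, the resulting labels map directly onto the background/red-tuber/yellow-tuber coding described above.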
Displaying results
When analyzing an individual image, the results are displayed directly on the screen: one window shows the image with the previously labeled potato tubers, and another shows a table with the estimated morphology and color features. When a batch of images is selected, the results are instead saved in a folder named ‘Results’ in the source directory given by the user. This folder contains the processed images with the labeled potato tubers and a CSV file with all the measurements linked to their respective IDs.
Module 2: common scab detection
Deep learning
The deep-learning module processes individual tiles of fixed size (172 × 172 pixels), each representing an individual tuber. The tiles contain the segmented tubers without background, as output by the morphology module.
Convolutional neural network architecture
A benchmark of six common CNN architectures was conducted to model and predict the severity level of scab infection: VGG16, VGG19, ResNet50V2, ResNet101V2, InceptionV3, and Xception. These architectures were developed for different object recognition applications, including plant and disease classification, and ranked among the best performing in deep-learning challenges 38 . Their characteristics are compared in Table 1 .
Different training strategies were compared, and the training parameters were optimized according to the following criteria: minimizing the false-positive rate of the infected classes in the healthy class and maximizing the separability between the minor and severe infection classes. The compared strategies were transfer learning and fine-tuning (Table 2 ). For both strategies, the networks were initialized with the weights resulting from training on the ImageNet dataset, which contains 1.2 million images in 1000 classes such as “cat”, “dog”, “person”, and “tree”, among others 39 . In addition, we evaluated the robustness of the models with standard metrics (loss and accuracy). A schematic overview of ScabyNet module 2 is shown in Fig. 6 .
Generally, the complete training of a CNN is computationally intensive and requires a substantial amount of annotated data, usually gathered from multiple collaborative projects. For new applications where less data is available, it is common to use a pre-trained network from public databases and adapt it to the specific application.
Visual inspections and manual measurements
Manual measurements for morphology traits
Tubers were measured manually using the ImageJ software 24 , with the 5 cm ruler placed at the bottom of the images as a scaling reference. Each potato was selected and its length and width were measured using the “line” tool from the toolbox; the length-to-width ratio was then calculated from these two parameters.
Expert scores for disease severity of CS
The severity levels of CS are usually assessed visually and scored by an expert evaluating two parameters: first, the surface area covered with scab lesions, and second, the severity level, i.e., how deep the scab lesions are. The surface area covered is rated on a scale from 0 to 9, where 0 corresponds to no scab lesions on the surface and 9 corresponds to about 100% of the surface area covered with lesions. The depth of the scab lesions is rated on a scale from 1 to 3, where 1 = superficial lesions, 2 = raised lesions, and 3 = deep lesions, the most severe form. Here, only the surface area was used, and the ten-grade expert scoring was transformed into a five-class severity scale.
Classes for CS
Potato tuber images were visually selected and classified into five classes, depending on the severity level of CS on the surface area. Class 1 corresponds to 0–9%, class 2 to 10–24%, class 3 to 25–50%, class 4 to 51–74%, and class 5 to 75–100%. Figure 7 shows the scoring scale with corresponding images.
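The class boundaries can be encoded directly as a lookup; the handling of fractional percentages at the class edges is an assumption here, since the paper lists integer ranges:

```python
def severity_class(scab_area_percent):
    """Map the percentage of tuber surface covered by scab lesions to classes 1-5."""
    for upper, cls in ((10, 1), (25, 2), (51, 3), (75, 4)):
        if scab_area_percent < upper:
            return cls
    return 5
```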
Statistical analyses
Statistical analyses were performed using R version 4.1 46 and Python version 3.9 47 . To evaluate module 1, the Pearson correlation coefficient was computed between the ground truth (manual measurements of the tubers) and the results obtained with ImageJ and with ScabyNet, respectively. For module 2, the two training strategies, fine-tuning and transfer learning, were compared. To ensure the reliability of the benchmark, the dataset (7154 potato tubers) was split into training, validation, and testing sets. Using the random splitting function of scikit-learn (train_test_split) 48 , the main dataset was divided into 70% for the training set and the remaining 30% for the testing set. The training set was then divided again, using the same function, into 70% for training and the remaining 30% for validation. The results were compared with the expert scoring in order to verify the accuracy of module 2.
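The nested 70/30 splitting can be sketched without the scikit-learn dependency; the seeds and the 1000-item stand-in dataset below are illustrative only:

```python
import random

def random_split(items, frac=0.7, seed=42):
    """Shuffle a copy of `items` and split it into (frac, 1 - frac) parts."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = round(len(items) * frac)
    return items[:cut], items[cut:]

# 70% learning / 30% test, then the learning set again into 70/30 train/validation.
tubers = list(range(1000))            # stand-ins for the 7154 tuber tiles
learning, test = random_split(tubers)
train, validation = random_split(learning, seed=7)
```

Splitting the learning set a second time, rather than the full dataset, keeps the test set strictly untouched during model selection.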
Research involving plants
All the methods employed regarding plant materials followed the strict rules of the Swedish Agricultural University which are in accordance with all international standards, including those in the policies of Nature. | Results
ScabyNet is a user-friendly application that contains two main modules and a home tab dedicated to providing information on how to use the application. Modules 1 and 2 were designed to process images for morphology traits and CS severity, respectively. In both cases, an individual image containing any number of potato tubers, or a batch of images, can be analyzed. In the first case (individual image), the user selects the image file; after the analysis, the resulting image is displayed on the screen with the morphological features in a separate table, and the user decides whether or not to save the results. In the second case (a batch of images), the user selects the source folder containing the images to be analyzed, and a subfolder named “Results” is automatically created in the root of the data to store the processed images and the CSV file with the measurements.
Module 1: morphologic features
Performance test
To assess the consistency of the morphological feature analysis, a dataset of 100 randomly selected images containing different numbers, shapes, and sizes of potato tubers was analyzed. In total, 4735 tubers were processed.
Tuber size
The results obtained by ScabyNet were compared with the ground truth data and with a method proposed for ImageJ 24 . A medium-to-high correlation was observed between ScabyNet and the ground truth (> 0.84; Table 3 ), and a high correlation between ScabyNet and ImageJ (> 0.88; Table 3 ). Hence, ScabyNet provides a robust and reliable approach to evaluate tuber size features such as those described here. Figure 8 shows the frequency distribution of all the morphological traits measured with this module. All the traits showed an almost symmetrical Gaussian distribution, except circularity, which was left-skewed.
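For reference, the Pearson correlation coefficient used in this comparison is, in a dependency-free sketch:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value above 0.84 between ScabyNet and the manual measurements indicates a strong linear agreement between the two measurement methods.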
Time efficiency
Images were processed on a computer with an Intel Core i7-8650U CPU at 1.90 GHz (boosting to 2.11 GHz). Time was recorded for all the steps required to analyze an image, starting with image acquisition and ending with saving the data. A complete analysis is described in the following subsections.
Image acquisition (establishing the protocol): Organizing the shooting place with the illumination panels, setting up the image parameters, and placing the camera in the stand at 40 cm took 10 min. This step is done only once during the analysis. Placing the previously washed tubers on the background then took 2 min per batch of six tubers, and capturing each image took less than 5 s. The time taken for cleaning the potatoes was not counted, as cleaning is already required before visual inspection of the tubers.

Image processing (executing the ScabyNet GUI):
The approximate time for a user to analyze an individual image with six potato tubers was around 4 s; the time to select the image file depends on its accessibility. A more detailed inspection was performed with images containing different numbers of potato tubers (Table 4 A). The results showed that the analysis of one image containing up to 12 potato tubers takes between 1 and 3.5 s. For a batch, the time varies with the number of images to analyze. Time was also recorded for a dataset of 100 images with 4735 potato tubers (Table 4 B).
Module 2: scab detection using deep-learning
The dataset, composed of 7154 individual tuber tiles from both red and yellow potato varieties, was randomly divided into two main subsets: a learning set, composed of 70% of the data and used to calibrate and optimize the models, and a test set, containing the remaining 30%, used to assess the performance of the models on independent data.
During the training phase, cross-validation was performed, for which the learning set was itself divided into a training set, constituting 70% of the learning set, and a validation set with the remaining 30%.
Training steps
Figure 9 shows the training accuracy (A) and training loss (B), and Fig. 10 shows the validation accuracy (A) and validation loss (B).
The models trained with the transfer learning strategy are denoted with “_tl” and displayed as dashed curves, while the ones trained with fine-tuning are denoted with “_ft” and displayed as continuous curves. The different models were trained according to the parameters presented in Table 5 , respectively to the architecture types and the training strategy.
All architectures showed a typical learning behavior, with accuracy increasing and loss progressively decreasing at each epoch. Generally, the fine-tuning strategy performed significantly better than the transfer learning strategy for the deeper and more sophisticated architectures (ResNet50V2, ResNet101V2, InceptionV3, and Xception), both in training and in validation, whereas the simpler VGG networks (VGG16 and VGG19) performed better in transfer learning (Fig. 9 ). With the transfer learning strategy, the ResNet architectures could not be trained for the CS application: their training accuracy barely exceeded 50% after 15 epochs, and the training loss stopped decreasing after 7 epochs. This means the produced models were equivalent to random decisions and did not incorporate any new information. The validation performances showed the same behavior and confirmed that training these architectures in transfer learning failed with the available data. Similarly, InceptionV3 and Xception also showed poor transfer learning performance, with maximum accuracies of just over 60%. The two models stagnated quickly after a few iterations, and their weights did not exhibit any significant change after 6 epochs. The respective validation accuracy and loss showed the same poor performances. For VGG16 and VGG19, the training accuracy exceeded 80% after 10 epochs and reached its maximum at 14 epochs, with 85% and 86% accuracy, respectively. However, the validation results became increasingly unstable, with drops of up to 10% accuracy between successive epochs; we can therefore attribute their relatively good training performances only to a form of overfitting. Similarly, in fine-tuning, the VGG architectures were very unstable, as shown by their training and validation accuracies, which shifted substantially between epochs, and they never exceeded 80% accuracy.
Eventually, only four models (InceptionV3, Xception, ResNet50V2, and ResNet101V2) reached performances over 90% accuracy, all in fine-tuning. However, these results proved consistent only for InceptionV3 and Xception, as shown by the difference between training and validation behavior for the ResNets (Fig. 10 A). Likewise, only InceptionV3 and Xception showed stable results, as shown by the validation accuracy and loss (Fig. 10 A,B). In addition, these two models showed no sign of overfitting, as indicated by the consistent increase in accuracy coupled with decreasing loss after reaching more than 95% accuracy (a more detailed view of the losses can be found in Supplementary Fig. 1 ).
Ultimately, the most accurate and stable model was Xception trained in fine-tuning. The results showed that this architecture with this training strategy reached a stable accuracy of over 95% after 10 epochs and consistently improved until reaching 99% accuracy on the validation set while keeping a low loss.
Test step
Tables 6 , 7 , and 8 present the confusion matrices on the test data for the Xception models trained in fine-tuning after 15, 12, and 10 epochs, respectively, together with the corresponding per-class precisions. The actual classes are presented in the rows and the predicted classes in the columns. In total, 2146 tubers were tested, distributed as follows: 317 in class 1 (healthy), 712 in class 2, 591 in class 3, 351 in class 4, and 174 in class 5. The test set was sampled randomly, respecting the class proportions of the complete dataset. At 15 epochs (Table 6 ), only 6 tubers out of 2146 were misclassified, resulting in accuracies above 99% for all classes. At 12 epochs (Table 7 ), the precision was above 95% for all classes except class 5, i.e., the tubers most severely affected by CS, for which the precision was 92%. At 10 epochs (Table 8 ), the precision was above 90% for all classes except class 4, where tubers were confused with the neighboring classes 3 and 5. The test of Xception trained in fine-tuning with the last 15 layers unfrozen showed, on independent data, performances consistent with the training. The model perfectly discriminated healthy and lightly CS-infected tubers from the severe forms, and even the moderate symptoms (classes 3 and 4) could be distinguished with the optimal model. This means that, optimally trained with adequate parameters and strategy, Xception can easily distinguish infection classes describing a 10% to 25% difference in infected areas.
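The per-class precisions reported in Tables 6, 7, and 8 follow directly from the confusion matrices: with rows as actual classes and columns as predicted classes, the precision for class j is the diagonal entry divided by the column sum. A small sketch (with an illustrative matrix, not the paper's data):

```python
def per_class_precision(confusion):
    """Per-class precision from a confusion matrix (rows = actual, columns = predicted)."""
    k = len(confusion)
    precisions = []
    for j in range(k):
        predicted_j = sum(confusion[i][j] for i in range(k))
        precisions.append(confusion[j][j] / predicted_j if predicted_j else 0.0)
    return precisions
```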
In plant breeding, tuber quality, in terms of shape, size, and CS severity level, still relies on manual measurements and low-throughput visual assessments. These approaches are known to suffer from a lack of accuracy and reproducibility, on top of being time-consuming and labor-intensive. ScabyNet proposes an image-based method divided into two modules to measure the morphological features of tubers and to assess the severity level of CS. The results of both modules indicated a high correlation with the manual measurements and the visual scoring of the evaluated tubers. Furthermore, the applicability of the two modules was assessed in terms of time and accuracy.
Module 1
Several studies have evaluated tuber morphology and reached high correlations with manual measurements. However, some inconsistencies in the outputs can be found in the simpler approaches, while the most advanced ones are costly, often impractical, or unsuitable for full-scale trials 12 , 13 , 17 , 33 . Here, a low-cost image acquisition and processing approach was implemented, requiring only a simple RGB camera, a static frame, a light panel, and ScabyNet. Compared with similar approaches 11 , 12 , notable differences were found. First, the potential of those approaches is limited for images containing greater variability in tuber shapes and colors, or containing several tubers or other objects. Second, to be processed properly, their images must be acquired with a strict protocol, including the use of a lightbox 11 . In the same way as described in 11 , ScabyNet also estimates circularity and LWR, which allow the screening of new varieties for different markets in terms of quality 13 , 49 . However, ScabyNet can only evaluate two-dimensional tuber shapes, which is a limitation in terms of describing the real consumer value of tubers. Different studies have evaluated all the possible views of an object, and a close approach has even been reported to predict diseases based on seed morphological parameters 50 . With the Cgrain instrument 51 , it is possible to obtain a full 3D view of a seed and analyze its parameters almost instantaneously, which could provide more detailed shape information than 2D imaging in the case of tubers. However, the size of tubers would require a specific instrument or a 3D imaging platform, which, in terms of time and labor, would not represent an efficient approach.
Tuber shape proved to be a complex parameter to evaluate, especially for tubers with abnormal or very flattened shapes, and its interpretation depends mainly on the intended final use. To obtain a standard measurement, the length and width were defined assuming the potato was always placed horizontally; these values would otherwise change if the potatoes were placed vertically. To handle this case, a processing step checks for inconsistencies between width and length, assigning the longest axis to the length and the shortest to the width, while the LWR and circularity are calculated directly from these two parameters.
An important aspect to highlight is that, compared with other approaches, ScabyNet proved to be robust and accurate, and above all much easier to implement. TubAR 12 , despite being a free application for the R software 46 , requires pre-installed packages, and its execution involves running several command lines, meaning that only seasoned operators are able to use it. In the case of the approach presented for the ImageJ software 11 , only a set of commands to follow is provided, with no complete application, which likewise can only be applied by seasoned operators.
Module 2
In module 2, the best-performing model embedded in ScabyNet was the Xception architecture, which reached high accuracy and proved to be robust and stable for the tested dataset and the considered severity classes. The fine-tuning strategy was adapted to disease scoring, as it is a substantially different task from what the CNNs are usually trained for. The sole adaptation of the weights in the deep neural network (DNN) part, i.e., the classifier part, was not enough to distinguish between basic severity classes. This means that new weights in the filters were necessary to capture the specific patterns associated with CS coverage on the tubers. The specificities of the Xception architecture made it possible to adapt the model efficiently to CS scoring with a reduced dataset. The other tested architectures (except InceptionV3) and training strategies could not provide the same performance or stability: the advanced architectures would require more data to converge toward the right feature-extracting filters, while the VGGs are simply not complex or deep enough to transfer the common features learned from generic data to the CS detection and scoring problem. Most likely, the advanced networks would also require deeper training of the CNN part, i.e., unfreezing more layers and reaching earlier layers in the backpropagation. Consequently, they would be more likely to show instabilities or even overfitting, considering the relative size of the databases available in agriculture compared with the ones used to calibrate the pre-trained models.
With the optimal model, some confusions are still possible, mostly between severity levels that are close to each other. A solution to improve the model would be to define the classes as severity profiles rather than as ranges of infected area, thus better matching the assessment rules of breeders. As, to our knowledge, no other study tackles the scoring of potato CS with an automated image-based approach, it is not possible to compare the obtained results. A solution to improve both the performance and the sensitivity of the model, i.e., its ability to distinguish finer differences in infection profiles, would be to fine-tune more deeply or even retrain the network with a generic plant disease database such as “PlantVillage” or “PlantImageAnalysis” 52 , 53 . These databases contain hundreds of thousands of examples of healthy and infected plants from more than a hundred species, with different pathogens and infected organs. The generic features learned from them should be a better starting point to adapt the network to CS, or even to generalize ScabyNet to various plants and diseases.
This study proposes a novel application named ScabyNet that combines traditional image processing techniques and deep learning algorithms to estimate potato tuber morphology features and to detect and score the severity of CS disease. The approach demonstrated operational qualities such as versatility and efficiency in analyzing images of potato tubers of various sizes, shapes, and colors, and with different levels of CS disease severity. The accuracy of ScabyNet was validated through correlation with manual measurements and with a previously established method for measuring potato tuber length and width, as well as visual correlation with disease severity scores. Among six different architectures and two training strategies tested, the one selected for ScabyNet outperformed the others with an accuracy of 99%.
Notably, ScabyNet was developed as a lightweight application that relies solely on CPU computation, enabling greater portability and ease of deployment on a wider range of computing systems. These findings demonstrate that ScabyNet represents a significant advancement in agricultural research, providing an efficient, accurate, and objective method for analyzing tuber morphology features and estimating CS disease severity in potato crops.
In future research, it is planned to extend the applicability of ScabyNet to include additional color ranges of tubers and other potato varieties and incorporate semantic segmentation to achieve higher precision and accuracy in tuber identification. The purpose would be to reach finer levels of discrimination between infection stages and to recognize specific patterns of the symptoms to match better with phytopathology. Furthermore, incorporating additional spectrometric data, such as hyperspectral imaging, may provide further insights into the finer phenomena related to the disease and allow for the detection of early symptoms before they become visible 20 . | Common scab (CS) is a major bacterial disease causing lesions on potato tubers, degrading their appearance and reducing their market value. To accurately grade scab-infected potato tubers, this study introduces “ScabyNet”, an image processing approach combining color-morphology analysis with deep learning techniques. ScabyNet estimates tuber quality traits and accurately detects and quantifies CS severity levels from color images. It is presented as a standalone application with a graphical user interface comprising two main modules. One module identifies and separates tubers on images and estimates quality-related morphological features. In addition, it enables the extraction of tubers as standard tiles for the deep-learning module. The deep-learning module detects and quantifies the scab infection into five severity classes related to the relative infected area. The analysis was performed on a dataset of 7154 images of individual tiles collected from field and glasshouse experiments. Combining the two modules yields essential parameters for quality and disease inspection. The first module simplifies imaging by replacing the region proposal step of instance segmentation networks. 
Furthermore, the approach is an operational tool for an affordable phenotyping system that selects scab-resistant genotypes while maintaining their market standards.
Subject terms
| Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-023-51074-4.
Acknowledgements
We would like to thank Anja Haneberg (Graminor) for providing significant technical and laboratory assistance throughout this study, and Inger-Lise Wetlesen Akselsen (NIBIO) for providing bacterial isolates used in the inoculation trials.
Author contributions
A.C., J.D., and M.K.A.: Conceptualization, project planning, and funding acquisition. F.A. and F.L.: Methodology, image and data analysis, writing the first draft. J.D., N.E.N., and F.L.: Image and plant material acquisition. All authors read and reviewed the final version of the draft.
Funding
Open access funding provided by Swedish University of Agricultural Sciences. This research was funded by The Research Council of Norway, The research funds for agriculture and food industry, Project No. 294756 awarded to NIBIO.
Data availability
The datasets generated and analyzed during the current study are not publicly available due to being obtained from a commercial breeding program but are available from the corresponding author on reasonable request.
Competing interests
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential competing interest. | CC BY | no | 2024-01-15 23:41:54 | Sci Rep. 2024 Jan 13; 14:1277 | oa_package/c6/3f/PMC10787732.tar.gz
PMC10787733 | 38218751 | Methods
Deep convolutional neural networks for face memorability prediction
Deep convolutional neural networks perform strongly across machine learning tasks such as regression and classification. In this work, we trained two models (MemVGG and IncResMem) on LaMem, in addition to the original MemNet model, to predict image memorability, and then fine-tuned them on the 10k US Adult Faces Database. The backbones of MemNet, MemVGG, and IncResMem are AlexNet, VGG16, and a combination of InceptionResNetV2 and VGG16, respectively. The backbone models of MemNet and MemVGG are pre-trained on both object categories (ImageNet) and scene categories (Places). For the IncResMem model, we used an InceptionResNet model pre-trained on ImageNet together with a hybrid pre-trained VGG, so we took advantage of both ImageNet and Places classes for predicting memorability scores. The reason behind this choice is that the LaMem dataset is very diverse; it contains objects, scenes, and combinations of both. Consequently, to ensure that the models extract good representations of these images, we chose hybrid models as their backbones. In MemNet, AlexNet features are used, followed by three fully connected layers. The input images were first resized to (256, 256) pixels with a bilinear transformation and then center-cropped to a (224, 224) square. All input images were normalized to the range between 0 and 1. The batch size is 64 in all of the models, and Euclidean distance is used as the loss function. The MemVGG model is very similar to MemNet, except that VGG16 is used as the backbone and VGG16 features replace the AlexNet features. In this model, we did not need to normalize the input images between 0 and 1. The last model we trained on the LaMem dataset is IncResMem; its architecture is shown in Fig. 3 .
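The resize–crop–normalise pipeline for the MemNet inputs can be sketched in plain numpy. Note that the paper uses a bilinear resize; this dependency-free stand-in uses nearest-neighbour sampling instead, so it is only an approximation of that step:

```python
import numpy as np

def preprocess(img: np.ndarray, resize_to: int = 256, crop_to: int = 224) -> np.ndarray:
    """Resize (nearest-neighbour stand-in for the paper's bilinear step),
    centre-crop, and scale an H x W x 3 uint8 image into [0, 1]."""
    h, w = img.shape[:2]
    rows = (np.arange(resize_to) * h / resize_to).astype(int)
    cols = (np.arange(resize_to) * w / resize_to).astype(int)
    img = img[rows][:, cols]                          # resize_to x resize_to x 3
    off = (resize_to - crop_to) // 2
    img = img[off:off + crop_to, off:off + crop_to]   # centre crop
    return img.astype(np.float32) / 255.0             # normalise to [0, 1]
```

For MemVGG the final normalisation step would be skipped, since that model did not require inputs in [0, 1].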
InceptionResNetV2 achieves great performance in the image classification task on ImageNet and can also be trained much faster than ResNet models. We combined InceptionResNet class scores and VGG16 features, so the layer before the output includes 5096 nodes. The output predicts the memorability of images. There are two separate parallel branches in this network: the input size is (224, 224) for the VGG16 branch and (299, 299) for the InceptionResNetV2 branch.
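A 5096-node pre-output layer is consistent with concatenating VGG16's 4096-dimensional fully connected features with InceptionResNetV2's 1000 ImageNet class scores; this is our reading of the description, and the random vectors below are stand-ins for real activations:

```python
import numpy as np

rng = np.random.default_rng(42)
vgg_features = rng.standard_normal(4096)   # stand-in for VGG16 fc-layer activations
incres_scores = rng.random(1000)           # stand-in for InceptionResNetV2 class scores

# The two parallel branches are fused by concatenation before the
# regression output that predicts the memorability score.
fused = np.concatenate([vgg_features, incres_scores])
```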
After training these models on LaMem, we tested them on predicting the memorability of face images. We evaluated them on the 10k US Adult Faces Database and, as expected, they failed to predict the memorability of faces. As a result, following transfer learning principles, we fine-tuned these three models on the 10k US Adult Faces Database.
In order to train the models, we split the 10k US Face Database images into train, validation, and test splits, using 80 percent of the data for training and 10 percent for each of the validation and test splits. Again, Euclidean distance was used as the loss function, and we set the batch size to 64. Moreover, we used the Adam optimizer to train our models. Due to the large false alarm rate for human face images, we trained our models both with raw memorability scores (computed from hit rates) and with corrected memorability scores (accounting for the false alarm rate). We also tried some simple augmentations on the dataset and found that the models' scores increase slightly with simple augmentations such as random horizontal flipping (applied with probability 0.5).
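The 80/10/10 split and the horizontal-flip augmentation can be sketched with numpy (function names are ours; the split is applied to the annotated subset of 2222 faces):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_val_test_split(n: int, seed: int = 0):
    """Shuffle sample indices and split them 80/10/10 into
    train, validation, and test sets."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def random_hflip(img: np.ndarray, p: float = 0.5, rng=rng) -> np.ndarray:
    """Horizontally mirror an H x W x C image with probability p."""
    return img[:, ::-1] if rng.random() < p else img
```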
As we mentioned in the Results section, following Khosla et al. 11 , we applied a linear (affine) transformation to the predicted memorability scores so that their mean and variance match those of the training data; the same transformation aligns both moments simultaneously.
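This post-hoc calibration is an affine rescaling, so it changes the mean and variance of the predictions without affecting their ranking; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def match_moments(pred: np.ndarray, train_scores: np.ndarray) -> np.ndarray:
    """Affinely rescale predicted scores so that their mean and variance
    equal those of the training-set memorability scores."""
    z = (pred - pred.mean()) / pred.std()      # standardise predictions
    return z * train_scores.std() + train_scores.mean()
```

Because the slope of the transform is positive, Spearman rank correlations are unchanged by this step.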
Deep convolutional neural networks are capable of extracting highly generic features when trained on huge datasets. Since we are dealing with face images in a considerably small dataset, we decided to use deep models pre-trained on a large face dataset. Therefore, we utilized SeNet50, ResNet50, and VGG16, which are pre-trained on the VGGFace dataset 41 for a face identification task. Face identification is a type of face recognition task in which the model weights are optimized to match a human face from a digital image against a database of face identities. The VGGFace dataset contains about 2.6 million images covering more than 2.6k identities, with large variations in pose, age, illumination, ethnicity, and profession.
Then, we introduce seven new memorability-predicting models by leveraging SeNet50, ResNet50, and VGG16, all of which were first pre-trained on the VGGFace database for a face identification task, as described above. The architectures of SENet and ResNet are very similar to each other; the only difference is the presence of squeeze-and-excitation blocks in SENet. Their architecture, however, is very different from VGG16: VGG16 is a shallower network than ResNet50 and SENet50, all its convolutional filters are 3 × 3, and its max-pooling kernels have a size of 2 × 2. These models can be divided into three groups. In the first group, we fine-tuned the three models on the 10k US Faces Database to predict face memorability. Features of the last layer from each of the three models were combined pairwise to form the second group, which we then fine-tuned to predict face memorability scores: ResVGG combines features from the VGG16 and ResNet50 networks, SenVGG combines the SeNet50 and VGG16 features, and SenRes combines the SeNet50 and ResNet50 features. Finally, we combined features from all three models and proposed the SenResVGG network. We trained all models with a 0.5 chance of horizontal mirroring of the images as an augmentation. We observed that this augmentation helps overcome the over-fitting problem and also increases the rank correlation score.
Statistical tests
All the reported correlations in Tables 2 , 3 and 4 are statistically significant. Further, to demonstrate that the correlation scores are higher than any correlation that could be produced by chance, we conducted another experiment: we generated pairs of random vectors of size 8k, 1000 times, and calculated their Spearman correlation scores. The average correlation score was about 0 (−0.0003) and the maximum correlation score was 0.028.
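The chance-level check above is easy to reproduce; this sketch re-implements Spearman's rho as the Pearson correlation of rank vectors, which is valid here since continuous random draws are tie-free:

```python
import numpy as np

def spearman(a: np.ndarray, b: np.ndarray) -> float:
    """Spearman's rho for tie-free data: Pearson correlation of ranks."""
    ranks_a = np.argsort(np.argsort(a)).astype(float)
    ranks_b = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ranks_a, ranks_b)[0, 1])

rng = np.random.default_rng(0)
rhos = [spearman(rng.random(8000), rng.random(8000)) for _ in range(1000)]
mean_rho = float(np.mean(rhos))
max_abs_rho = float(np.max(np.abs(rhos)))
```

With n = 8000 the standard deviation of a null rho is roughly 1/√(n − 1) ≈ 0.011, so observed values of about −0.0003 (mean) and 0.028 (maximum) are exactly what chance predicts.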
Human participants data
The human participants’ data used in this study are publicly available datasets 4 , 11 . The participants were provided with an informed consent form which they signed and they were compensated for their time. The protocol was reviewed by the institutional review board at the Massachusetts Institute of Technology. All methods were carried out in accordance with relevant guidelines and regulations. | Results
This section presents the key findings obtained from the experimental analysis of training models for face-photograph memorability prediction. First, we demonstrate that the current memorability models trained on LaMem fail to predict face memorability scores. Then, we propose new architectures for predicting face memorability scores; these models perform close to human consistency in rank correlation. Finally, we show that the trained deep models can be used to predict memorability scores of both oval-shaped and square-shaped face images.
State-of-the-art memorability models fail to predict face memorability
We evaluated three state-of-the-art memorability models which are trained on LaMem dataset on the task of predicting face memorability. For this, we used 10k US Adult Faces Database 4 which contains 10,168 natural face photographs. 2222 of these faces were annotated with memorability scores. All the faces in this dataset are oval-shaped with the same height of 256 pixels but variable widths.
We tested MemNet on the task of predicting memorability scores of the 10k US Adult Faces Database. Figure 1 depicts the architecture of MemNet. The original MemNet implementation is only available in Caffe; therefore, we followed the methods described in the paper and retrained the model in PyTorch 33 . MemNet leverages AlexNet 22 as its backbone. AlexNet showed great performance in the image classification task in 2012 and began a revolution in this computer vision task. MemNet was the first model that utilized a deep convolutional neural network for predicting image memorability, and it outperformed all previous models designed to predict memorability scores. MemNet achieves a 0.64 rank correlation on memorability scores corrected for false alarms and 0.57 on pure hit-rate memorability scores on the LaMem dataset. The architecture of MemNet consists of five convolutional and three max-pooling blocks that extract the features for predicting image memorability. Our PyTorch MemNet model reached the same level of performance on the LaMem dataset as reported in the original paper 11 . In addition to MemNet, we also fine-tuned two other models ( MemVGG , depicted in Fig. 2 , and IncResMem , depicted in Fig. 3 ) using the LaMem dataset to test them on the 10k US Adult Faces Database. MemVGG is the memorability-predicting model with VGG16 as the backbone architecture. VGG16 is a newer model than AlexNet and achieves 92.7% top-5 test accuracy on ImageNet classification, whereas AlexNet achieves 84.6%. This model consists of thirteen convolutional layers to extract the features, followed by three fully connected layers to produce the output (image classification on ImageNet 34 or image memorability on LaMem). We fine-tuned a hybrid VGG16 model, pre-trained on the Places 35 and ImageNet datasets, to predict memorability scores on the LaMem dataset.
The average performance of this model was 0.63 on the 5 test splits of the LaMem dataset; however, unlike the original AlexNet-based MemNet, it does not require any complicated preprocessing steps (see the Methods section for more detail). The third model we trained on the LaMem dataset is IncResMem . Recently, Inception models have shown outstanding performance in various machine vision tasks such as image classification. Moreover, it has been shown that training with residual connections accelerates the training of Inception networks significantly. Therefore, we decided to use InceptionResNet 36 for image memorability prediction. One of the benefits of the InceptionResNetV2 model is that it is robust to noisy labels. Another important feature of residual networks is that they can have very deep architectures while avoiding the vanishing gradient problem. We took an InceptionResNetV2 model from Keras 37 that was pre-trained on ImageNet for image classification. This model is deeper than the previous two models. To obtain better performance, we combined the class scores from InceptionResNetV2 with VGG16 features to build the IncResMem model (see Fig. 3 ). In other words, this model uses both semantic features of the images and their categories. The Stem, Inception, and Reduction blocks are identical to the blocks introduced by Szegedy et al. 36 . This model achieved a 0.646 rank correlation score on the LaMem dataset.
After training these models on the LaMem dataset and obtaining valid models for predicting image memorability, we investigated how well they can predict the memorability of face images. We observed that these models perform poorly in estimating face memorability. The results of this experiment are presented in Table 1 . According to this table, the Spearman's correlation score between predicted and ground-truth memorability scores is reported in two categories: hit rate and true hit rate (corrected hit rate). The true hit rate (corrected hit rate) is calculated by subtracting the false alarm rate from the hit rate. This table illustrates that these models are not able to accurately predict the rank of the memorability scores of face images. The distribution of the predicted face memorability scores of the three models is shown in Fig. 4 . As depicted, these models clearly fail to predict memorability scores for face images. As a result, in the next section we propose and train new models for face memorability prediction.
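The corrected score defined above is straightforward to compute from per-image response counts (the function and argument names are ours):

```python
def true_hit_rate(hits: int, misses: int,
                  false_alarms: int, correct_rejections: int) -> float:
    """Corrected memorability score for one image: hit rate minus
    false-alarm rate, each estimated from participant response counts."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate - false_alarm_rate
```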
Memorability models for faces
As we demonstrated in the previous section, the models trained on the LaMem dataset did not show acceptable performance in predicting face memorability scores. We propose ten new models for predicting face memorability, in two groups. The first group consists of the three computational models pre-trained on the LaMem dataset; we introduced these models (MemNet, MemVGG, and IncResMem) in the previous section and fine-tuned them on the 10K US faces dataset for predicting the memorability of faces. The second group includes seven models that are pre-trained on the VGGFace dataset 32 and fine-tuned on the 10k US faces dataset to estimate face memorability scores. Since the memorability dataset for face images is relatively small, the starting point of the training process is crucially important, and we expected that using pre-trained face models would help us estimate face memorability. These seven models include VGG16, ResNet50, SENet50, and all pairwise and three-way combinations of features from these three models. We trained all the proposed models both with memorability scores computed from hit rates and with scores corrected for false alarms.
Consistent with Khosla et al. 11 , we observed that all models perform better when corrected hit-rate scores are used than when raw hit-rate scores are used. That is because the false alarm rate introduces noise into memorability scores; therefore, the models perform better when we reduce this noise by correcting for false alarms. Throughout this paper, we refer to the corrected hit rates as true hit rates and the uncorrected ones as hit rates. Moreover, the proposed models of both groups outperformed (see Table 2 and Table 3 ) the classic MemNet 11 (see Table 1 ) for predicting face memorability. Furthermore, we observed that the models based on pre-trained face models (group 2, see Table 3 ) performed relatively better than the memorability networks (group 1, see Table 2 ) when fine-tuned to predict face memorability. When the models are pre-trained on face images, their weights are optimized to find the face representations most useful for the face recognition task. As a result, with only a small dataset of face memorability scores, these models can be further tuned to do a better job of predicting face memorability scores. Comparing the performance of FaceMemVGG in Table 2 to VGG16 in Table 3 clearly shows that the main reason behind the performance difference is the different pre-training schemes: these two models have the same architecture, yet performance is much better when the model is first pre-trained on the VGGFace database in a face identification task. We should add that human consistency for the 10k US Faces Database is 0.68 and 0.69 when hit-rate and corrected hit-rate scores are used, respectively. SENet and SenVGG yielded higher rank correlations than the other models. These models employ squeeze-and-excitation blocks, which are beneficial in improving the representational power of the network.
These squeeze-and-excitation blocks are used before summation and also with the identity branch. A simple schema of how these blocks are used is shown in Fig. 5 .
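The channel-recalibration idea behind these blocks can be illustrated with a dependency-free forward pass. The weight shapes follow the usual squeeze-and-excitation formulation with reduction ratio r; this is an illustration, not the trained SENet50 weights:

```python
import numpy as np

def se_block(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Forward pass of a squeeze-and-excitation block on a C x H x W
    feature map. w1 has shape (C // r, C) and w2 has shape (C, C // r)."""
    s = x.mean(axis=(1, 2))                    # squeeze: global average pool -> (C,)
    e = np.maximum(w1 @ s, 0.0)                # excitation: FC + ReLU -> (C // r,)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ e)))    # FC + sigmoid -> per-channel gates in (0, 1)
    return x * gates[:, None, None]            # channel-wise rescaling
```

Each channel of the feature map is thus scaled by a learned, input-dependent gate, which is what improves the representational power of the network.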
Memorability of oval-shaped and square-shaped faces
The 10K US face dataset contains oval-shaped faces with a white background. However, it is time- and resource-consuming to convert all face images to the format of this database whenever we want to predict a face memorability score. Therefore, we decided to test our models on the same set of oval-shaped and square-shaped faces. We utilized StyleGAN2 38 , pre-trained on the FFHQ dataset 39 , to generate 8k high-quality and realistic face images. The faces generated by StyleGAN2 have a resolution of 1024 × 1024 pixels in three channels. We changed their size in the pre-processing step and then calculated their memorability scores with all our models. In order to ovalize these images, we leveraged MTCNN 40 to detect the coordinates of the faces in each image and then masked an oval onto them to match the format of the 10k US face database (see Fig. 6 ). Table 4 shows the Spearman's rank correlation of predicted memorability scores for the oval-shaped and square-shaped face images. We observe that, except for SenVGG, the models that were pre-trained on the VGGFace dataset and then fine-tuned with the 10K US face database result in higher correlation scores than the models that were first fine-tuned on LaMem and then fine-tuned on the 10K US face database.
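The ovalization step can be reproduced with a simple elliptical mask once a face bounding box is available; here the box is assumed to already be in (x0, y0, x1, y1) corner form, and MTCNN itself is not required for the masking:

```python
import numpy as np

def ovalize(img: np.ndarray, box, background: int = 255) -> np.ndarray:
    """Mask everything outside the ellipse inscribed in a face bounding
    box (x0, y0, x1, y1) with a plain background colour."""
    x0, y0, x1, y1 = box
    cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0      # ellipse centre
    ry, rx = (y1 - y0) / 2.0, (x1 - x0) / 2.0      # ellipse radii
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    inside = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
    out = np.full_like(img, background)
    out[inside] = img[inside]
    return out
```

Pixels outside the inscribed ellipse are set to a plain background, mimicking the white-background oval faces of the 10k US Adult Faces Database.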
Table 4 shows that the models with high correlation scores are able to extract face representations even from square-shaped faces and predict their memorability scores. As a result, these models can be utilized in predicting the memorability scores of face images, even without masking and ovalizing them. | Discussion
Focusing on face images, we showed that the existing models used for predicting image memorability fail to predict face memorability scores. We first evaluated MemNet 11 , which is trained on the LaMem 11 dataset. Moreover, we leveraged two other convolutional neural network architectures (VGG16 29 and InceptionResNet 36 ) and trained them on the LaMem dataset for image memorability prediction. These three models showed great performance in predicting the memorability of object, scene, and animal images on the LaMem dataset but, as we expected, failed to predict face memorability scores on the 10K US face dataset. The main reason is that these models are trained on LaMem, a large memorability dataset containing images of objects and scenes. We employed the 10K US Face Database 4 to fine-tune these three models and observed that the rank correlation scores increased significantly (see Table 2 ).
In addition to the three models previously mentioned, we introduced seven new models to estimate face memorability scores. These models are built upon state-of-the-art pre-trained face recognition models 29 – 31 , 41 , which we fine-tuned for memorability prediction. Our hypothesis was that these face recognition models, due to their efficiency in extracting facial features, would provide a stronger foundation for predicting face memorability. Furthermore, these new models benefit from simpler preprocessing steps compared to earlier models like MemNet. The backbone architectures of our seven models include VGG16 29 , SENET50 31 , and ResNet50 30 . Each of these has demonstrated excellent performance in various machine learning tasks, particularly in face recognition. Our observations confirmed that models pretrained on facial data are indeed more effective in predicting face memorability. This supports our hypothesis that these models are better at extracting facial features relevant for memorability. Additionally, we considered the possibility that the reduced performance of models trained on the LaMem dataset might be due to the presence of outliers which are unnaturalistic images or those with collage structures. Yet, given that we fine-tuned these models using the 10K US face database, it seems improbable that these few outliers would significantly affect our model’s performance. Future research might delve deeper into examining this aspect.
The 10K US face database primarily contains ovalized face images. However, there may be scenarios where predicting memorability scores for square-shaped face images is desirable. To address this, we tested whether our proposed face memorability prediction models maintain consistent performance on square-shaped faces. It should be noted that in our experiment, a square-shaped face image is not identical to an ovalized one (Fig. 6 ). Square-shaped images may contain additional elements, such as hairstyles or background features, which could potentially confound the model’s ability to accurately predict face memorability. Yet, our results show that these models can reasonably predict the memorability of square-shaped face images, as indicated by the rank correlation data presented in Table 4 .
The proposed models in this work pave the way for predicting the memorability of face images in new datasets. Acquiring memorability scores for images requires running large-scale visual memory experiments and crowd-sourcing participants, usually on online platforms like Amazon Mechanical Turk. These experiments are time-consuming and costly. Having these models removes this barrier and provides a great opportunity to run future experiments on face memorability. | With the advent of social media in our daily life, we are exposed to a plethora of images, particularly face photographs, every day. Recent behavioural studies have shown that some of these photographs stick in the mind better than others. Previous research has shown that memorability is an intrinsic property of an image, hence the memorability of an image can be computed from that image. Moreover, various works found that the memorability of an image is highly consistent across people and also over time. Recently, researchers employed deep neural networks to predict image memorability. Here, we show that although those models perform well on scene and object images, they perform poorly on photographs of human faces. We demonstrate and explain why generic memorability models do not result in an acceptable performance on face photographs and propose seven different models to estimate the memorability of face images. In addition, we show that these models outperform the classical methods previously used for predicting face memorability.
Subject terms | Every day, we meet new people or encounter new faces on social media. Some of these faces stick in our minds, while others are forgotten quickly. Image memorability is the probability that an observer will detect the repetition of an image after a single exposure to that image when presented amidst a stream of images 1 . Despite individual differences in remembering visual events 2 , 3 , it has been shown that the image’s memorability is consistent across people and over various time lags 1 , 4 – 10 . In essence, people show consistent behavior in remembering some images and forgetting others. These findings have led to the idea of predicting an image’s memorability based solely on the image itself, estimating what images are more or less memorable than others 11 .
Previous studies have indicated that individuals fail to accurately predict the memorability of images 5 . Research has found that images that stand out from their context are more likely to be remembered 12 . Additionally, distinctiveness plays a key role in face recognition, i.e., distinctive faces are recognized better than typical ones 13 . Furthermore, faces perceived as unusual in appearance have been shown to be remembered better than those considered typical 14 – 16 .
Bainbridge et al. 4 investigated what factors contribute to face memorability, examining the role of twenty personality (e.g., interesting/boring and calm/aggressive), social, and memory-related traits. After running a multiple linear regression model on these face attributes and memorability scores, they found that the combination of these attributes can only explain a small portion of the variance of the memorability scores. This suggests that the memorability of an image depends on the image itself, rather than on a limited set of identifiable attributes. It is worth highlighting that the concept of face memorability pertains to the memorability of a facial photograph, rather than an individual's actual visage (i.e., a photograph of Tom Cruise could be more memorable than another photograph of his). Indeed, in our recent work 17 , we developed a method to control and modify the memorability of a face photograph by photo-editing techniques based on generative models.
Various studies have aimed to predict image memorability. One of the earliest methods was proposed by Khosla et al. 18 , which used dense global features such as HOG 19 and SIFT 20 for predicting face memorability. However, these methods were not fully automatic and required manual tuning. Convolutional neural networks 21 have shown great performance in image classification task 22 and since then have been used in various computer vision and machine learning tasks. Khosla et al. 11 introduced the first model that used a convolutional neural network model called MemNet for predicting image memorability. It was trained by fine-tuning Hybrid-CNN 23 and performed near human consistency in rank correlation.
Most recent research has focused on improving the performance of these models by employing attention mechanisms 24 and residual blocks 25 . Lu et al. 26 tried to find out which elements make outdoor natural scenes memorable. They discovered that combining high-level scene-category features with deep features improves model performance in predicting memorability. While memorability is an intrinsic feature of an image, some works have studied extrinsic effects, such as eye movements, in predicting memorability 12 .
We see people’s faces in different conditions, e.g., while they are happy, angry, or neutral. Bainbridge et al. 27 shed light on how memorability changes with different transformations of the human face (neutral, happy, angry, 3/4 view, and profile view). They found that memorability is highly consistent within each image transformation as well: regardless of whether a person’s face is neutral or happy, if she has a memorable face, we will remember it, and vice versa.
In this work, we have focused on predicting the memorability of face photographs. As Squalli-Houssaini et al. 28 have demonstrated, deep neural network models (including MemNet 11 and other memorability networks trained on the LaMem dataset) succeed in predicting the memorability of scene and object images. Here, we evaluated several memorability models (including the original MemNet) trained on the LaMem dataset on predicting the memorability of face images. Consistent with Squalli-Houssaini et al. 28 , our results demonstrate that these memorability models fail in predicting the memorability of face photographs. Then, using the 10k US Adult Faces Database 4 , we fine-tuned VGG16 29 , ResNet50 30 , and SENet50 31 , which are pre-trained on the VGGFace dataset 32 for a face identification task, to predict face memorability scores. We also fine-tuned MemNet, which is trained and performs well on the LaMem 11 dataset, to predict face memorability. We hypothesize that models pre-trained on a face recognition task and then fine-tuned on the face memorability prediction task outperform those pre-trained on LaMem and then fine-tuned for face memorability prediction. The main reason is that models pre-trained on face images for a face recognition task will be more efficient in extracting face features that can later be utilized for predicting face memorability scores. Our proposed models outperformed the previous model 18 and got close to human consistency correlation in predicting face memorability. | Acknowledgements
This study was supported by the Canada First Research Excellence Fund (CFREF) through Western’s BrainsCAN Initiative and a Vector Institute Research Grant to YM. The Computational modeling was conducted on Compute Canada resources. MY was supported by a Vector Institute Masters Scholarship in Artificial Intelligence.
Author contributions
M.Y. and Y.M. conceived the study, M.Y. conducted the modeling and experiments, M.Y. and Y.M. analysed the results and wrote the manuscript.
Data availability
The code and models trained and/or analyzed during the current study are publicly available at https://github.com/mamyou96/FaceMemNet .
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:54 | Sci Rep. 2024 Jan 13; 14:1246 | oa_package/bb/1d/PMC10787733.tar.gz |
PMC10787734 | 38218969 | Introduction
Correct and cost-effective species identification is crucial in various research areas, including biodiversity assessments, where obtaining reliable information on species’ occurrences and distributions is pivotal. If species cannot be morphologically identified to the species level, they are often assigned to higher taxonomic levels, leading to less detailed analyses and consequently imprecise conclusions 1 , 2 . However, identification of samples using COI barcoding is expensive and time-consuming 3 and therefore not feasible in large biodiversity assessments including large numbers of specimens.
Matrix-Assisted Laser Desorption/Ionization Time-of-Flight mass spectrometry (MALDI-TOF MS) is a rapid species identification method that measures a proteome fingerprint to identify specimens using a reference library. With few preparation steps, peptides and proteins are extracted from tissue and embedded in a matrix absorbing laser radiation, while ionized, intact compounds are measured in a mass spectrometer 4 . This method is routinely applied for the identification of microorganisms such as bacteria, viruses and fungi 5 – 7 . It has also been used for food fraud detection 8 , 9 and to check for food adulteration 10 . In pilot studies, it was successfully applied for identification of metazoans such as copepods 11 – 17 , isopods 18 , 19 , different groups of Cnidaria 20 – 22 , molluscs 23 , fish 8 , 24 and especially disease vectors such as ticks, sandflies or mosquitoes 25 – 29 . Most studies analyzed only a few species or were limited to a certain taxonomic group, while studies across different classes and phyla are still missing. Also, no gold-standard protocol for metazoan analytics has been established yet. Systematic tests of how data processing affects identification success, and of whether and how pipelines need to be adapted to higher-taxonomic-level identification, are also missing.
For the first time, we present a generalized workflow for species identification of metazoans, as well as the subsequent bioinformatics, using a wide spectrum of marine taxa. We emphasize the importance of adjusting the bioinformatics to the data set and finally prove the power of proteomic fingerprinting for the differentiation of morphologically cryptic, closely related marine species and, beyond mere species identification, for differentiation on sex level, making it a promising tool for ecological research.
To investigate this, we start by looking at sample preparation in terms of the tissue-to-matrix ratio and how this affects mass spectra quality. This is followed by identifying the crucial steps during data processing for classification using random forest (RF). Subsequently, we analyze how large reference libraries should be to optimize RF-model capabilities for identification. Finally, we apply these findings to our dataset of almost 200 marine taxa to test both species identification and classification on a higher taxonomic level.
Sample material
Tissue for measurements was taken from the marine organisms tissue bank of the Senckenberg am Meer, German Centre for Marine Biodiversity Research, which was established using samples from numerous studies 30 , 51 – 57 (Supplementary Table S1 for accession numbers) on North Sea metazoans. The material from this collection was taken from specimens processed for COI-barcoding to create reference libraries for a variety of marine animal groups. During this process, tissue samples of the respective specimens were stored in ethanol at − 80 °C. Tissue samples were available for Bivalvia (muscle, 18 species), Cephalopoda (muscle from arm, 12 species), Gastropoda (muscle from foot, 24 species), Polyplacophora (muscle from foot, 2 species), Ascidiacea (tissue, 1 species), Teleostei (muscle, 67 species), Elasmobranchii (muscle, 7 species), Malacostraca (muscle from foot or chelae, 39 species), Thecostraca (muscle from foot, 1 species), Pycnogonida (leg fragment, 1 species), Asteroidea (tube feet, 10 species), Ophiuroidea (tissue from arm, 10 species) and Echinoidea (tissue from the base of the tubercle, 6 species) (n species = 198, n specimens = 1246).
Sample preparation
The basic protocol of sample preparation was the same for all analyzed tissue samples. A very small tissue fragment (< 1 mm 3 ) was incubated for 5 min in HCCA as a saturated solution in 50% acetonitrile, 47.5% molecular grade water and 2.5% trifluoroacetic acid. Tissue from the crustacean Cancer pagurus Linnaeus, 1758, the fish Clupea harengus Linnaeus, 1758, the cephalopod Eledone cirrhosa (Lamarck, 1798) and the echinoderm Stichastrella rosea (O.F. Müller, 1776) was used to find an optimal tissue to HCCA matrix ratio. Tissue was weighed on a METTLER TOLEDO XS3DU micro-balance and the amount of matrix was adjusted to tissue weight to obtain the desired ratios ranging from 0.012 to 200 μg μl −1 . After incubation, 1.5 μl of the solution was transferred to each of 10 spots on a target plate. Mass spectra were measured with a Microflex LT/SH System (Bruker Daltonics) using method MBTAuto. Peak evaluation was carried out in a mass range between 2 and 10 kDa using a centroid peak detection algorithm, a signal-to-noise threshold of 2 and a minimum intensity threshold of 600. To create a sum spectrum, 160 satisfactory shots were summed up.
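The tested ratio series (15 concentrations between roughly 0.012 and 200 μg μl −1 , see Results) is consistent with a twofold serial dilution starting at 200 μg μl −1 . The Python sketch below illustrates this; the twofold-step assumption is ours, inferred only from the stated end points and step count (the paper's own analyses were done in R):

```python
def twofold_dilution_series(start_ug_per_ul, n_steps):
    """Twofold serial dilution series, highest concentration first."""
    return [start_ug_per_ul / 2 ** i for i in range(n_steps)]

# 15 concentrations: 200, 100, 50, ..., ~0.012 ug/ul
series = twofold_dilution_series(200.0, 15)
```

Note that the reported optimal window of 3.1 to 12.5 μg μl −1 corresponds to adjacent steps of such a series.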
Based on observations during this initial test, a rapidly applicable protocol was developed that avoids the need to weigh each tissue sample. Our tests allow us to identify inferior sample-to-matrix ratios and thus adapt the sample preparation. They also showed that spectrum quality is sufficient across a wide range of tissue-to-matrix ratios. Thus, we concluded that the matrix volume added to a tissue sample can be adjusted depending on tissue volume, so that tissue samples are always completely covered by HCCA matrix with a small layer (ca. 1 mm) of supernatant. Samples were incubated for 5 min and 1.5 μl of the solution was transferred to a single spot on a target plate for measurement. Each spot was measured two to three times.
Mass spectra processing in R
Mass spectra data was imported to R 58 using MALDIquantForeign 59 and further processed using MALDIquant 60 . Mass spectra were trimmed to an identical length from 2 to 20 kDa. Subsequently, spectra were square root transformed, smoothed using Savitzky Golay method 61 , baseline corrected using SNIP approach 62 and normalized using total ion current (TIC) method.
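Two of these MALDIquant steps — the square-root transform and total ion current (TIC) normalization — are simple enough to illustrate directly. The following Python sketch mimics them on a toy intensity vector (the actual pipeline is in R; trimming, smoothing and baseline correction are omitted here):

```python
import math

def sqrt_tic_normalize(intensities):
    """Square-root transform, then scale so the intensities sum to 1 (TIC)."""
    transformed = [math.sqrt(max(v, 0.0)) for v in intensities]
    tic = sum(transformed)
    return [v / tic for v in transformed]

# toy spectrum; after normalization the intensities sum to 1
spectrum = sqrt_tic_normalize([0.0, 4.0, 16.0, 64.0])
```

The square root damps dominant peaks, and TIC scaling makes spectra from different total ion yields comparable.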
Spectra were quality controlled using the command ‘screenSpectra’ from the R-package MALDIrppa 50 . Mass spectra with a notably high a-score were checked by eye and discarded if of poor quality. If, as a result, only a single specimen of a certain species remained, that specimen was also discarded from the data set.
Evaluation of random forest model for identification
Besides initial sample preparation and subsequent data processing, we tested how to improve a random forest (RF) model used for species identification. The optimal number of trees and variables was tested in a previous study 63 . Here, we assessed the effect of the minimum number of specimens per species category on the resulting model power. We sampled the dataset using two to 11 specimens per species, including only species with at least 11 specimens per class (n = 20). For each minimum number of specimens, 100 data sets were sampled using ‘sample_n’ from the R-package dplyr 64 , a RF model was created and the OOB errors were assessed accordingly.
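The repeated subsampling can be pictured as drawing, for each run, a fixed number of specimens per species. Below is a minimal Python analogue of the ‘sample_n’ step (the study used dplyr in R; the data layout of (species, specimen) pairs is hypothetical):

```python
import random
from collections import defaultdict

def subsample_per_species(records, n_per_species, seed=None):
    """Draw n specimens per species from (species, specimen_id) pairs;
    species with fewer than n specimens are skipped."""
    rng = random.Random(seed)
    by_species = defaultdict(list)
    for species, specimen in records:
        by_species[species].append(specimen)
    sample = []
    for species, specimens in sorted(by_species.items()):
        if len(specimens) >= n_per_species:
            for specimen in rng.sample(specimens, n_per_species):
                sample.append((species, specimen))
    return sample

records = [("sp_A", i) for i in range(11)] + [("sp_B", i) for i in range(11)]
subset = subsample_per_species(records, 6, seed=1)  # 6 specimens per species
```

Repeating this 100 times per subsample size and training one RF model per draw yields the distribution of OOB errors described in the text.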
Standardization of data processing
Based on literature research and own observations, three data processing steps were identified that may have a severe impact on the data and the resulting quality of a random forest (RF) classification model 65 . (I) Iterations of baseline subtraction: this is a first manipulation step to reduce chemical noise and is carried out iteratively 66 . Increasing iterations will result in loss of low-intensity peaks. (II) Signal-to-noise ratio (SNR) during peak picking: an increase in SNR will exclude signals of low intensity. The higher the SNR value, the fewer peaks will be kept. (III) Half window size (HWS) during peak picking: within the HWS, the peak with the highest intensity will be chosen as the resulting peak during peak picking. The higher the HWS, the fewer peaks will be picked across an entire mass spectrum range.
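The roles of SNR and HWS can be made concrete with a toy local-maximum peak picker: a point is kept only if it exceeds SNR times an estimated noise level and is the maximum within ± HWS positions. This Python sketch is a deliberate simplification of the MALDIquant behaviour (noise is reduced to a single scalar here):

```python
def pick_peaks(intensities, noise, snr, half_window):
    """Keep index i as a peak if intensity >= snr * noise and it is the
    maximum within +/- half_window positions."""
    peaks = []
    n = len(intensities)
    for i, v in enumerate(intensities):
        if v < snr * noise:
            continue  # excluded by the SNR criterion
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        if v == max(intensities[lo:hi]):
            peaks.append(i)
    return peaks

spectrum = [0, 1, 0, 9, 1, 0, 2, 0, 30, 2, 0]
narrow = pick_peaks(spectrum, noise=1.0, snr=3, half_window=2)  # two peaks
wide = pick_peaks(spectrum, noise=1.0, snr=3, half_window=5)    # one peak
```

With the wider half window, the smaller peak falls inside the window of the dominant one and is suppressed — the "higher HWS, fewer peaks" effect described above.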
Interactive effects of these data processing steps were tested using the classification success of a random forest model as the target variable: iterations of baseline estimation and peak detection HWS were both varied between 5 and 30, and SNR from 3 to 20. In total, 12,186 analyses were carried out. In all cases, peak binning using ‘binPeaks’ from the R-package MaldiQuant was repeated until the number of variables in the data did not change further. The RF model (ntree = 2000 and mtry = 35) was trained on the Hellinger-transformed peak intensities as suggested by Rossel and Martínez Arbizu 43 . The RF out-of-bag (OOB) error was used as a measure of classification success. For these analyses, based on the results from the RF-model evaluation, only species with at least six specimens were included. To investigate the main drivers of classification success, a generalized additive model (GAM, family: binomial; link function: logit) was calculated.
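The Hellinger transformation applied to the peak intensities amounts to taking the square root of each spectrum's relative intensities; a small Python illustration (the study itself used R):

```python
import math

def hellinger(peak_intensities):
    """Square root of relative intensities; the squared values sum to 1."""
    total = sum(peak_intensities)
    return [math.sqrt(v / total) for v in peak_intensities]

row = hellinger([1.0, 4.0, 4.0])  # approximately [0.333, 0.667, 0.667]
```

This transformation down-weights dominant peaks so that Euclidean distances between transformed rows correspond to Hellinger distances between the spectra.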
Testing the classification success
In concordance with the results from the previous tests, only species with at least six specimens were included in the model. Mass spectra from these species were processed according to the results from the test on variation of HWS (7), SNR (3) and baseline iterations (22). To test the overall classification success on species level, single specimens were separated from the RF training data set and subsequently identified using this model. After classification, the post-hoc test by Rossel and Martínez Arbizu 63 , 66 using the R-package RFtools ( https://github.com/pmartinezarbizu/RFtools ) was applied to verify the RF classification. This post-hoc test uses the empirical distribution of RF assignment probabilities from the RF model and compares the assignment probabilities of newly classified specimens to this distribution. Whereas classified specimens with assignment probabilities falling within this empirical distribution are considered true positive (tp), specimens with assignment probabilities significantly different from this distribution are considered false positive (fp).
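The idea behind the post-hoc test can be sketched as a one-sided check of a new assignment probability against the empirical distribution of probabilities from the training model. The Python fragment below is our own simplification of that idea, not the RFtools implementation:

```python
def posthoc_accept(train_probs, new_prob, alpha=0.05):
    """Accept a classification (true positive) if its assignment probability
    is at least the alpha-quantile of the empirical training distribution;
    otherwise flag it as a likely false positive."""
    probs = sorted(train_probs)
    threshold = probs[int(alpha * len(probs))]
    return new_prob >= threshold

# hypothetical assignment probabilities seen for a class during training
train_probs = [0.60, 0.70, 0.72, 0.80, 0.81, 0.85, 0.90, 0.92, 0.95, 0.97]
accepted = posthoc_accept(train_probs, 0.88)  # within the distribution
rejected = posthoc_accept(train_probs, 0.15)  # flagged as false positive
```

The real test compares the full distributions statistically; this threshold version only conveys the accept/reject logic.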
Case studies
In order to show the applicability of MALDI-TOF MS, we present two model cases. First, data of the North Sea starfish Astropecten irregularis (Pennant, 1777) were investigated based on MALDI-TOF mass spectra. This species was found to be genetically divergent 30 while revealing high morphological similarity. Differentiation of the species was tested using RF models. Furthermore, data on the crustacean Euterpina acutifrons (Dana, 1847) from Rossel and Martínez Arbizu, 2019 were analyzed to show the applicability for sex-level differentiation using hierarchical clustering and RF. Based on the Gini index, the 30 most important peaks for species/sex differentiation in a RF model were extracted and investigated to show the expression within the respective groups.
Phyla and class models for identification
To test whether specimens can be identified on an above-species level, a RF model containing only class and phylum categories was applied. All spectra from the species to be classified were excluded from the model to evaluate its use for specimens not included in a library. The respective specimens were identified using the model and the predicted class/phylum was tested with the RF post-hoc test. To test classification on phylum level, 1246 specimens from 198 species were included. On class level, 1227 specimens from 195 species were analyzed.
The data set contained 1246 specimens from 198 taxa including echinoderms (Asteroidea, Echinoidea and Ophiuroidea), molluscs (Bivalvia, Gastropoda, Polyplacophora and Cephalopoda), arthropods (Crustacea, Pantopoda) and chordates (Tunicata, Vertebrata: Teleostei, Elasmobranchii). For 1139 specimens, a published COI barcode or another molecular identifier is available (Supplementary Table S1 ). The remaining specimens were identified morphologically. For 226 specimens, attempts to obtain mass spectra either failed or yielded spectra of minor quality, which were discarded.
Sample preparation
To determine the concentration range for successful measurements, weighed tissue samples were mixed with varying amounts of α-cyano-4-hydroxycinnamic acid (HCCA). In total, 15 different tissue/matrix concentrations were tested, ranging from 0.01 to 200 μg μl −1 (Fig. 1 A). Despite variations between samples, high quality mass spectra were generally obtained in a concentration range from 3.1 to 12.5 μg μl −1 . The largest concentration range for successful measurements was recorded for the echinoderm Stichastrella rosea (sample MT03612), with successful measurements across almost the entire concentration range. No measurements were obtained for concentrations of 200 μg μl −1 . It was only when concentrations reached 12.5 μg μl −1 or lower that results were obtained for all specimens.
Good measurements were obtained from small tissue samples when these were completely submersed in the HCCA solution within a 1.5 ml microcentrifuge tube (Fig. 1 C). Before the sample-to-matrix ratio became too low to detect a signal, an increase in baseline height in the lower masses was recorded (Fig. 1 B). When the sample-to-matrix ratio was increased, an increase in noise was observed (Fig. 1 D). Quality improvement of spectra from high tissue-to-HCCA matrix ratios was achieved by dilution. This was tested using tissue from the crustacean Cancer pagurus (sample MT01453). Concentrations were diluted from an initial concentration of 200 μg μl −1 that resulted in no mass spectra at all. The measurements from diluted preparations then showed similar results as measurements made with the respective concentrations from undiluted sample preparations (compare Fig. 1 A, brown and red results).
Optimize random forest (RF) model for classification
For application of RF as a method for classification, we evaluated how strongly the number of specimens per species influences the model error. A repeated (n = 100) random sampling of two to eleven specimens for species with at least 11 specimens in the data set (n = 20) was carried out. This data was then used to create RF models, and the out-of-bag (OOB) error was assessed as a quality criterion. Increasing the number of specimens per species resulted in a decrease of the OOB error (Supplementary Fig. 1 ). With only two specimens per species, the OOB error ranges from 0 to 0.375 with a mean error of 0.18 (SD = 0.073). With eleven specimens per species, the error ranges from 0.005 to 0.036 with a mean error of 0.019 (SD = 0.008). The decrease in OOB error nearly saturates for n > 10. For further analyses, we chose n = 6 because the results show a strong decrease in OOB-error variability and a strong decrease in maximum OOB error at this point.
Standardization of data processing
Different steps throughout data processing can have a severe impact on classification results. The effect of changing the different data processing steps was evaluated using the RF OOB error as an indicator. For each data set, a RF model was trained and the OOB error recorded (Supplementary Fig. 2 ). Whereas alteration of baseline subtraction iterations generally had only little impact on the RF OOB error, changing the half window size (HWS) and the signal-to-noise ratio (SNR) for peak picking had greater effects (Supplementary Fig. 1 ). The generalized additive model (GAM) applied to find the most influential factor shows that the OOB error is significantly influenced by alteration of the HWS (Table 1 , p-value: 0.007) and SNR (Table 1 , p-value: 0.001). A combination of 22 baseline estimation iterations, a HWS of 7 and a SNR of 3 resulted in the lowest OOB error of 0.032. These settings were used for further analyses.
Classification success
Finally, we tested the identification success based on MALDI-TOF MS data for each specimen in the data set by excluding the respective specimen and using the remaining reference data to identify it.
Overall, 93% of the specimens (n = 775) were identified correctly and 86% (n = 721) were accepted as correctly classified by the post-hoc test (Fig. 2 A). Identification of specimens of the classes Ascidiacea, Teleostei, Elasmobranchii, Echinoidea, Ophiuroidea, Asteroidea, Bivalvia and Gastropoda resulted in success rates of more than 90%. For the classes Cephalopoda and Thecostraca, the identification success was still above 85%. Success rates lower than 80% were not recorded. Of the 61 misclassified specimens, 15 were assigned to the wrong species but recorded as correct identifications by the post-hoc test. Of all misclassified specimens, two were assigned to congeneric species and rated as true positives by the post-hoc test, meaning these would have been misclassified and remained unrecognized.
Case study: cryptic species
In the present data, the identification of the starfish Astropecten irregularis (Pennant, 1777) specimens from the North Sea serves as an example for closely related species that are still distinguishable by proteomic fingerprinting. In a previous study, this morphotype was found to consist of two major genetic clades with inter-clade distances in COI of up to 12%. Morphological differences were not determined so far. Both groups show different distribution patterns with overlaps 30 . Our data included specimens of both clades, A. irregularis 1 (n = 8) and A. irregularis 2 (n = 27).
Data processing settings were optimized for this sub-set of the data (HWS = 9 and SNR = 8). Within a RF model produced from the data, a clear distinction between the two genetic groups was possible. None of the specimens was misassigned to the respective other group. This RF model was also used to find the most important variables for differentiation of the two groups using the Gini index, which reflects the degree of dissimilarity of the respective variables 31 . The 30 most important variables are given in Fig. 3 A. Whereas all peaks can be found in specimens of both groups, the intensities differ strongly, allowing a clear differentiation of the A. irregularis clades using proteome fingerprinting.
Case study: sex determination
In previous research it was shown that sex determination may be possible in some species by analyzing the proteomic fingerprint 13 , however the data was not analyzed any further therein. In depth analyses emphasize these findings and show sex-specific protein patterns in the crustacean copepod Euterpina acutifrons (Fig. 3 B). Mass peaks such as m/z 2523, 2929 and 7417 are female specific and not found in any of the male specimens. Others however, predominantly occur in male specimens (m/z 3638, 3719). Further mass peaks are evenly observed in measurements from both sexes but show intensity-pattern differences.
Phyla and class models for identification
If a species is not part of a reference library, it may be desirable to obtain a higher-level classification. To test whether this is possible based on MALDI-TOF mass spectra of metazoans, species were systematically taken out of the RF training data set and classified with a RF model that was trained on a higher taxonomic level but does not include any information on the respective species to be classified. Across all phyla, a classification success of 81% (77% true positive rate (tpr)) was achieved, with phylum-wise success rates ranging from 73% (64% tpr) in Echinodermata to 95% (92% tpr) in Chordata (Fig. 2 B). On class level, the combined success rate was 72% (66% tpr), ranging from 7% (0% tpr) in Polyplacophora, for which only two species were included in the data set, to 96% (94% tpr) in Teleostei.
For 31 taxa (n = 324), a congeneric species was included. Thus, it was tested whether specimens tend to be classified as a congeneric species when the respective species is removed from the training data. Of these 31 taxa, 30% of specimens were classified as a congeneric species.
The aims of this study were (1) to evaluate the wide applicability of proteomic fingerprinting for species identification in marine science across different metazoan phyla and classes, (2) to identify critical steps in sample preparation and data processing, and (3) to contribute to the development of standard procedures and best practices for MALDI-TOF MS based metazoan classification in rapid biodiversity assessments. The general applicability to metazoans has been proven before 8 , 9 , 13 , 32 – 36 . However, here we show for the first time the applicability of this method to a large taxonomic range using a comprehensive data set with an overall species identification success rate of 93%.
Similar high identification success rates on species level were observed for individual metazoan groups 20 , 27 , 36 – 39 . Additionally, our results show that specimens absent from the reference library will be assigned to the correct phylum or class with a high probability, implying some kind of phylogenetic signal at higher taxonomic levels, as was already reported for congeneric Drosophila 40 . Testing whether species would be classified as a congeneric species in the absence of the actual species was less promising in our study, with only 30% of specimens being assigned to a congeneric species. This complies with other studies that only show occasional similarity of congeneric species, e.g. in cluster analyses, but without consistency across all congeneric species 11 , 13 , 26 .
In closely related species, morphological identification can often be complicated. Using proteomic fingerprinting, these problems can however be resolved as indicated by the analysis of the A. irregularis complex. Even though mass spectra show high similarities, distinct patterns of peak presence and absence as well as pronounced differences in relative peak intensities serve as good markers for species identification. Beyond mere species identification, the example of E. acutifrons shows the power of the method to differentiate specimens even on a sex level. This has been shown before for e.g. the fish species Alburnus alburnus (Linnaeus, 1758) 35 . Whereas authors focused on presence and absence of peaks, we were able to show that also relative intensities of certain mass peaks play an important role in differentiation of sexes. Prior studies on larger planktonic copepods have also shown a great potential for differentiation of developmental stages based on a proteomic fingerprint 17 .
Finally, we have shown the necessity of comprehensive reference libraries. Low numbers of specimens per species in reference libraries fail to provide sufficient information on species-specific mass spectra features and intraspecific variability. Only with around nine to ten reference specimens per species does the identification error stabilize at a constantly low level. This supports findings by Rakotonirina et al. 27 , who found an increase of identification score with increasing numbers of available main spectrum patterns. In general, we recommend using more than three specimens per species and preferably including around ten specimens for every species in a reference library.
MALDI-TOF MS can be used as a universal method for species identification of metazoan species. Due to the short preparation time, low costs 3 , 41 and high identification success it can be a valuable tool in biodiversity assessments replacing time-intense morphological identification or costly DNA barcoding. Especially in cases of closely related or very similar species it can foster a rapid identification. The applicability of proteome fingerprinting for the differentiation of cryptic species was already shown and even in cases of morphologically very similar species, still differences were found 19 , 42 .
Tissue samples used in this work were obtained from specimens stored for seven to 12 years under partly unknown storage conditions. We assume working with fresh or recently fixed material would have resulted in even higher identification success rates. This is supported by the high mass spectra quality obtained from fish species, which were extracted and put into freezer storage almost immediately after sampling (personal communication Knebelsberger). The adverse effect of fixation and storage on the resulting mass spectra quality in metazoans was investigated several times and supports this assumption 27 , 43 . We obtained good results for storage at − 20 °C and also for long-term storage at − 80 °C; thus, we recommend cold storage of samples at − 20 °C until further systematic analyses specify threshold temperatures for short-term (months) or long-term (years) storage.
Our tests have shown that sample concentration is pivotal to obtain good quality mass spectra. While too low sample/matrix ratios will result in lower intensities and a higher baseline, too much tissue will increase the noise in the data and result in unsuccessful measurements. For all investigated taxa, the same sample preparation method was used; however attention must be paid to the correct ratio of matrix and compound to be analyzed. This allows the wide application of this method without adaptation of the protocol to a certain species as it would be necessary for methods such as COI barcoding where certain groups would need highly specific sets of amplification primers 44 , 45 and adjustment of PCR settings. We expect that mass spectra quality could be further improved with more elaborate preparation protocols. This would however counteract the advantage of this method being rapid, user-friendly and straightforward compared to other methods such as COI-barcoding. A critical aspect for the future establishment of this method is also the development of objective evaluation criteria for the sufficient quality of a spectrum for species identification and the procedures to analyze it. Such evaluation methods will be necessary to ultimately facilitate the integration of numerous species spectra into cross-laboratory databases.
Much effort is put into optimizing mass spectra quality by adjusting different preparation protocols 46 , 47 or developing methods for steps such as baseline correction, smoothing or peak picking 48 , 49 . Methods are adjusted either to increase classification success or to obtain better mass spectra reproducibility. Here, we tested the influence of certain steps during data processing on classification success, focusing on the important steps for peak detection. Whereas baseline subtraction and adjustment of a SNR value both aim at reducing noise within the data, adjusting the HWS influences the peak picking resolution. Thus, by decreasing the HWS during peak detection, the number of peaks will increase, as the highest peak within each half window is detected. This will result in peaks of very similar size being recognized as distinct peaks, rather than being put together in a single bin. This also explains the strong effect of both parameters, SNR and HWS, compared to baseline subtraction. Baseline subtraction is constrained to reducing instrument-dependent noise. Adjustment of the SNR value will, however, like HWS alteration, affect the number of more dominant peaks and thus the general resolution of the mass spectra. Hence, more species-specific information is retained and more information is available for classification. Based on our results, rather than testing all variables, adjusting SNR and HWS should be adequate to optimize the data pipeline. However, it needs to be emphasized that this pipeline aims at optimizing species identification and may not be adequate for the investigation of intraspecific variability, as was shown elsewhere 16 .
In summary, we propose a workflow applicable to any metazoan species or tissue sample to be identified: a comprehensive reference library is needed, with species-level identification by morphological or molecular approaches (Fig. 4 ). In the lab, a small tissue sample (up to 1 mm 3 ) is retrieved and incubated for at least 5 min in the HCCA-matrix solution. Of the resulting extract, 1 to 1.5 μl are transferred to a target plate for measurement. Data processing is carried out in R (Fig. 4 ). Mass spectra quality control is done by eye and supported by R-packages such as MALDIrppa 50 . Finally, based on previously assessed species identification, data processing can be optimized to obtain ideal settings for classification. Based on our results, this can be narrowed down to adjustment of the HWS and SNR values. Based on the reference library, a RF model can be calculated for specimen identification (Fig. 4 ). Applying a post-hoc test will provide further support for the identification. If the classification is not well supported, a RF model on class or phylum level can be applied to obtain a higher-level classification.
MALDI-TOF MS was proven an easy-to-apply, cost-effective and time-saving tool for identification across taxa. It is especially feasible in applications where mere species identification is desired, for example in biodiversity assessments. With the standardized workflow, a wide range of marine metazoan specimens can be identified quantitatively and effectively on species level, thereby bypassing some of the high requirements associated with genetic methods, such as access to special laboratories, searching for primers etc. We want to highlight that proteomic fingerprinting will, due to its simplicity, reliability and efficiency, be a valuable supplement to the molecular toolbox for taxonomy. | Proteomic fingerprinting using MALDI-TOF mass spectrometry is a well-established tool for identifying microorganisms and has shown promising results for the identification of animal species, particularly disease vectors and marine organisms, and can thus be a vital tool for biodiversity assessments in ecological studies. However, few studies have tested species identification across different orders and classes. In this study, we collected data from 1246 specimens and 198 species to test species identification in a diverse dataset. We also evaluated different specimen preparation and data processing approaches for machine learning and developed a workflow to optimize classification using random forest. Our results showed high success rates of over 90%, but we also found that the size of the reference library affects classification error. Additionally, we demonstrated the ability of the method to differentiate marine cryptic-species complexes and to distinguish sexes within species.
Subject terms
Open Access funding enabled and organized by Projekt DEAL. | Supplementary Information
| Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-024-51235-z.
Acknowledgements
The authors thank Michael Raupach, Thomas Knebelsberger, Andrea Barco and their collaborators, students and technicians for bringing the tissue database to life, for the good documentation of their work and for making their work publicly available. This work was supported by the following grants: the Federal Ministry of Education and Research (Grant No. 03F0499A) and the Land Niedersachsen. This is publication 14 of the Senckenberg am Meer Proteomics Laboratory. This study was supported by the DFG initiative 1991 “Taxono-omics” (Grant Number RE2808/3-1/2). HIFMB is a collaboration between the Alfred-Wegener-Institute, Helmholtz-Center for Polar and Marine Research, and the Carl-von-Ossietzky University Oldenburg, initially funded by the Ministry for Science and Culture of Lower Saxony and the Volkswagen Foundation through the ‘Niedersächsisches Vorab’ Grant program (Grant No. ZN3285). We are grateful for silhouette images provided under CC0 1.0 Universal Public Domain Dedication license by PhyloPic ( https://www.phylopic.org/ ).
Author contributions
S.R., J.P., S.L. and P.M.A. conceived the study. H.N. carried out the majority of morphological species identifications. S.R., N.C. and A.E. carried out MALDI-TOF MS measurements. S.R. and J.P. analyzed the data and wrote a first manuscript draft. All authors significantly participated in critical revision of the manuscript draft.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Data availability
All mass spectra are available at Data Dryad (10.5061/dryad.7pvmcvdzf). Relevant R scripts are stored alongside the raw data.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:54 | Sci Rep. 2024 Jan 13; 14:1280 | oa_package/ce/70/PMC10787734.tar.gz |
PMC10787735 | 38218913 | Introduction
Cementless Oxford mobile-bearing unicompartmental knee arthroplasty (OUKA) reportedly achieves clinical results comparable to those of cemented OUKA, with markedly reduced radiolucent lines 1 . However, medial tibial plateau fracture is a serious complication after cementless OUKA. Its occurrence after OUKA has typically been attributed to technical errors 2 – 4 . The keel of the tibial component was shown to play an important role in fracture occurrence, and a shorter distance between the keel of the tibial component and the cortex (keel-cortex distance; KCD) was associated with an increased risk of fracture 5 .
Fractures are more common in Asian countries (3.8–8.0%) 6 – 9 than in non-Asian countries (< 1%) 10 – 12 , perhaps due to the high prevalence of constitutional varus 13 in Asian patients 14 . Patients with proximal tibial vara also have a high prevalence of fractures 8 , 9 . The KCD was recently reported to be shorter in patients with proximal tibial vara 5 . Varus placement has been commonly implemented in fixed-bearing UKA; it can avoid stress concentration and potentially prevent failure 15 , 16 . Slight varus placement of the tibial component could therefore be an effective procedure in OUKA and might widen the KCD, thus decreasing the risk of fractures. However, the effect of varus placement on the KCD has not yet been evaluated. This simulation study used 3D-CT to evaluate KCDs in relation to tibial component varus/valgus alignment. We hypothesised that the KCD is longer when the tibial component is in varus placement than in perpendicular or valgus alignment, even in knees with proximal tibial vara.
This research was approved by the institutional review board of Takatsuki General Hospital (No. 2020-14). All methods were carried out in accordance with relevant guidelines and regulations. We studied 51 unilateral lower limbs in 51 consecutive patients who underwent primary OUKA in our hospital between February and April 2020. There were 39 women and 12 men (mean age 71.7 ± 6.9 years, mean body mass index 25.3 ± 3.6 kg/m², hip-knee-ankle (HKA) angle 7.9° ± 5.5° in varus). As this was an observational study without patient invasion or intervention, we did not obtain consent directly from each patient. The need for informed consent was waived by the ethics committee of Takatsuki General Hospital. However, we disclosed the purpose of the study and related information in an opt-out format and guaranteed the opportunity to refuse participation, in accordance with the ethics committee of Takatsuki General Hospital. All patients were diagnosed with anteromedial osteoarthritis 17 and selected according to previously described guidelines 18 . Flexion contracture of the knee was < 15°, and the HKA angle was < 15° in all knees.
Measurement of the keel-cortex distances and tibial component coverage
As part of the routine examination, whole-leg CT scans were performed in every patient with 2-mm-thick slices using an Aquilion ONE scanner (Toshiba, Tokyo, Japan). Patients were positioned supine on the table. Scanning was performed from the hip to the ankle joint with the knee extended and the patella facing upward. The obtained image datasets were imported into 3D multiplanar reconstruction image simulation software (ATHENA; Soft Cube, Osaka, Japan). This system has computer-aided design (CAD) data of various implants, including OUKA, and can accurately assess the implant position relative to bony landmarks in TKA 19 . The tibial mechanical axis (TMA) passed through the centre of the medial and lateral tibial eminences and the centre of the talar dome. The tibial AP line connects the middle of the posterior cruciate ligament (PCL) and the medial border of the patellar tendon attachment to the tibial tubercle, as described previously 20 . The proximal tibial articular surface was cut perpendicular to the TMA with a posterior inclination of 7° and 4 mm below the medial joint line. The cutting line at the articular surface was determined to be parallel with the tibial AP line through the tip of the medial intercondylar eminence, which is accessible in the small operating field in medial UKA. Next, the KCD and over-coverage of the tibial component were evaluated with the component set perpendicular (neutral), 3° valgus (valgus3), 3° varus (varus3), and 6° varus (varus6) to the TMA. The rotational centre of the tibial component varus/valgus alignment was set at the tip of the medial intercondylar eminence (Fig. 1 ). Four KCDs were measured: anterior, anteromedial, posterior and posteromedial (Fig. 2 A). The Oxford partial knee component size (Zimmer Biomet, Warsaw, IN) was selected based on the medio-lateral dimension of the tibial cutting surface so that the medial edge of the component was flush with, but never under-covering, the medial tibial cortex.
Under- and overhang of the anterior part of up to 3 mm was tolerated. The amount of over-coverage was measured by calculating the area within the enclosed line, as shown in Fig. 2 B. Osteophytes were excluded from the measurement range.
Measurement of the tibial morphology
All preoperative weight-bearing radiographs were obtained one month before surgery according to a previously reported standardised protocol 21 . Briefly, the patella was placed forward, with the ankle in the neutral position. Patients were instructed to stand upright with extended knees with both heels and hallux in contact with the floor. Tibial morphology was assessed with the medial eminence line (MEL), as previously described 8 , 9 . The MEL was drawn passing through the apex of the medial intercondylar eminence and parallel to the tibial anatomical axis (TAA). The TAA was defined as a line connecting the centres of the proximal 1/3 (p1/3) and distal 1/3 (d1/3) of the tibia 22 . If the MEL passed lateral to the medial cortex of the tibia, the tibia was classified as ‘intramedullary’, and the medial condyle was considered to be of normal shape (Fig. 3 A). Otherwise, if the MEL passed medial to the medial cortex, it was classified as ‘extramedullary’, and the medial condyle was considered to be markedly overhanging (Fig. 3 B) 9 . In addition, the proximal tibia vara angle (PVA) was evaluated to assess medial bowing in the proximal tibia with the AP radiographs of the lower extremity, according to a previous article 22 . The PVA was defined as the angle between the TAA and the line connecting the centre of the tibial eminence (CE) and the midpoint of the proximal 1/3 of the tibia (Fig. 3 C).
Statistical analysis
Intraclass and interclass correlation coefficients (CC) were calculated to examine the reproducibility of the measurements. All measurements were performed twice by one surgeon and once by another examiner. CCs for intra- and inter-observer reliability were > 0.81 (range 0.81–0.96) for all measurements (Table 1 ).
All values are reported as mean ± standard deviation (SD). Results were analysed using StatView 5.0 (Abacus Concepts Inc., Berkeley, CA, USA). All parameters were normally distributed. Spearman’s rank correlation analysis was used to assess the correlation of PVA with the KCDs and amount of over-coverage. The KCDs in each region and amount of over-coverage were compared between two groups (extramedullary and intramedullary) using unpaired t-tests. They were compared using repeated-measures ANOVA with within-factors (neutral, valgus3, varus3, varus6) in both groups (extramedullary and intramedullary) using Bonferroni correction. Additionally, to investigate the benefit of varus placement over neutral placement, the difference in KCDs between varus placement (varus3 and 6) and neutral placement were compared between extramedullary and intramedullary groups using unpaired t-tests.
Post-hoc power analysis was performed using G*Power 3 23 . For repeated-measures ANOVA with within-factors, the study was expected to provide a power (1 − β) of 0.99 and 0.81 for detecting an effect size (f) of 0.3 with a type-I error (α) of 0.05 in the intramedullary (n = 34) and extramedullary (n = 17) groups, respectively. For unpaired t-tests, the effect size was calculated as Hedges’ g using group means and SDs, with a 95% confidence interval (CI) for effect sizes 24 .
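The Hedges' g value reported in the Results for the PVA comparison (6.8 ± 2.8°, n = 17 vs. 3.1 ± 4.3°, n = 34) can be reproduced from these summary statistics. The following is a minimal sketch using one common formulation of Hedges' g (pooled SD with the small-sample correction factor J); the function name is illustrative, not taken from the paper:

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g from group means, SDs and sizes (pooled-SD formulation)."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                   # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # small-sample correction factor J
    return j * d

# PVA: extramedullary (n = 17) vs. intramedullary (n = 34) group
g = hedges_g(6.8, 2.8, 17, 3.1, 4.3, 34)
print(round(g, 2))  # 0.94, matching the value reported in the Results
```

The confidence intervals reported in the paper would additionally require a standard-error formula for g, of which several large-sample approximations exist.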
Ethical approval
This research has been approved by the IRB of the authors’ affiliated institutions. (2020-14). | Results
For all subjects, significantly shorter KCDs and larger over-coverage in valgus3 were found compared with the others (neutral, varus3, and varus6) ( P < 0.0083 after Bonferroni correction; Table 1 ). Posterior KCDs showed lower values in neutral compared with varus3 and varus6 ( P < 0.0083 after Bonferroni correction; Table 2 ).
There were 34 patients (67%) in the intramedullary group and 17 patients (33%) in the extramedullary group. No statistically significant differences were noted in terms of age, sex, BMI, preoperative coronal alignment, and maximum flexion angle (Table 3 ). However, the PVA was significantly higher in the extramedullary group than in the intramedullary group (6.8 ± 2.8° vs. 3.1 ± 4.3°, P < 0.001, Hedges' g = 0.94, 95% CI 0.41 to 1.56).
Correlations between PVA with KCDs and over-coverage
PVA showed significant negative correlations with posterior and posteromedial KCDs for all within-factors. High correlations were found between PVA and the amount of over-coverage in a neutral position (Table 4 ).
Comparison of KCDs and over-coverage between extramedullary and intramedullary groups
Comparison between the groups is shown in Table 5 . The anterior and anteromedial KCDs showed no significant difference between the groups for all within-factors. However, the posterior and posteromedial KCDs were significantly lower in the extramedullary group than in the intramedullary group for all within-factors (valgus3, neutral, varus3, and varus6). The amount of over-coverage was significantly larger in the extramedullary group than in the intramedullary group when the tibial component was set in a neutral position.
Comparison of KCDs and over-coverage within-factors (valgus3, neutral, varus3, and varus6) in both extramedullary and intramedullary groups
Significantly shorter KCDs in valgus3 was found compared with the others (neutral, varus3, and varus6) in both groups ( P < 0.0083 after Bonferroni correction; Fig. 4 ). Posterior and posteromedial KCDs had lower values in neutral than in varus3 and varus6 ( P < 0.0083 after Bonferroni correction; Fig. 4 ).
Regarding the amount of over-coverage, in the intramedullary group the value was significantly higher in valgus3 than in the others (neutral, varus3, and varus6). In the extramedullary group, the value was significantly higher in valgus3 than in varus3 and varus6. Additionally, the amount of over-coverage was significantly higher in neutral than in varus3 and varus6 ( P < 0.0083 after Bonferroni correction; Fig. 5 ).
Comparison of difference in KCDs from neutral to varus placement (varus3 and varus6) between extramedullary and intramedullary groups
Significantly larger differences were found in KCDs from neutral to varus3 and varus6 in extramedullary group compared with the intramedullary group (varus3 minus neutral; 1.15 ± 0.86 mm vs. 0.70 ± 0.65 mm, P = 0.03, Hedges' g = 0.62, 95% CI = 0.04 to 1.23. varus6 minus neutral; 1.54 ± 1.01 mm vs. 0.97 ± 0.88 mm, P = 0.04, Hedges' g = 0.61, 95% CI = 0.02 to 1.21). | Discussion
In this 3D simulation study, the KCD was longer and the over-coverage was smaller with varus implantation. Meanwhile, valgus implantation shortened the KCD and increased the amount of over-coverage. Based on these results, varus implantation seems beneficial in maintaining a sufficient KCD, which might decrease the risk of fracture. These results confirmed our prior hypothesis. This is the first study to describe the effects of tibial component coronal alignment on KCDs and bony coverage for OUKA, which could be informative for surgeons in preoperative planning.
Posterior and posteromedial KCDs were shorter in patients with overhanging medial tibial plateaus, similar to previous findings 5 . In addition, larger over-coverage was observed in patients with an overhanging medial tibial plateau than in those without. The PVA showed a significant correlation with the posterior and posteromedial KCDs. We also found higher PVAs in the extramedullary group than in the intramedullary group. Qualitative assessment using the MEL classification thus reflects proximal tibial vara and is a simple and useful means of predicting fractures. An overhanging medial tibial plateau reportedly carries a higher risk of fracture 5 , 8 , 9 , so slight varus alignment of the tibial component is especially recommended in such knees. This information may be helpful when preparing the keel slot.
In both extramedullary and intramedullary groups, all KCDs were significantly lower when the tibial component was set in 3° valgus relative to the tibial AP axis than when set in a neutral position and 3° and 6° varus relative to it. There was a larger amount of over-coverage when the component was in valgus alignment than when it was set in neutral or varus alignment. This suggests that the valgus alignment of the tibial component decreases the bone mass supporting the tibial components and may be a risk factor for fractures in OUKA. Previous studies using the finite-element model demonstrated a significant increase of strain on the medial aspect of the proximal tibia following UKA in the setting of valgus implantation of tibial components 25 , 26 . Moreover, valgus implantation of the tibial component seems to cause deterioration of the coverage. Surgeons should therefore avoid the valgus implantation of the tibial component in OUKA.
Posterior and posteromedial KCDs were shorter in neutral than in 3° or 6° varus. In addition, increases in KCDs from neutral to 3° or 6° varus were significantly larger in the extramedullary group than in the intramedullary group. Regarding component coverage, the extramedullary group had significantly larger over-coverage than in the intramedullary group when the tibial component was set in a neutral position. Furthermore, in the extramedullary group, the amount of over-coverage was significantly larger in neutral alignment compared with 3° and 6° varus. Implantation in slight varus alignment seems to provide an advantage for the surgeon because of increased bony support under the tibial tray and achieving adequate component coverage, especially for patients with overhanging medial plateaus who are at high risk of posterior tibial cortical damage. The benefits of a slight varus alignment of the tibial component on joint line preservation, natural knee kinematics and better clinical outcomes have been reported 15 , 16 , 27 . The optimal target should therefore be slight varus alignment instead of placement perpendicular to the mechanical axis, especially in patients with medial overhanging tibias. However, the traditional extramedullary alignment resection guide was designed to cut perpendicular to the mechanical axis, so it is difficult to cut the proximal tibia accurately in a slight varus alignment without navigation or patient-specific instrumentation. Hiranaka et al. developed a new slidable fixator instead of the standard fixator to set the extramedullary rod on the leg. This enables an intentional varus tibial cut for OUKA 28 . This technique could be a simple and useful alternative means of obtaining an intentional varus tibial cut in OUKA.
This study has a number of limitations. First, due to the nature of this simulation study, actual postoperative cases were not examined. Such cases should be examined to seek the direct association of coronal alignment, shorter KCDs and postoperative fractures, and to confirm that varus implantation is not a trade-off for inferior long-term implant survival. Nevertheless, the primary purpose of this study was to investigate the effect of the varus/valgus alignment of the tibial component on the KCD and bony coverage. A simulation study adjusts for confounders (component positions in the sagittal and axial planes) influencing the KCD and bony coverage, which could not be adjusted for in actual postoperative cases. Second, tibial component size is often chosen based on the AP diameter of the tibial cut surface; in this study, however, the tibial component was chosen to minimize medial overhang, since an association of medial overhang with poor clinical outcomes and postoperative pain has been reported 29 . This difference in size selection may lead to different results. Finally, our study population was limited to Japanese patients undergoing UKA. Differences in the shape of the tibia have been reported among ethnicities 30 , 31 . However, as tibial fracture is substantially common in Japan, this information might be important for Japanese patients, and perhaps for other Asian ethnicities in whom an overhanging medial tibial plateau is frequently reported.
In OUKA, varus implantation increased the KCD and this may decrease the risk of fracture, even in knees with overhanging medial condyle. By contrast, the KCD is shortened by valgus alignment of the tibial component, which increases over-coverage, so this alignment should be avoided. | A short keel-cortex distance (KCD), especially to the posterior cortex, is a potential risk factor for tibial plateau fracture after Oxford mobile-bearing unicompartmental knee arthroplasty (OUKA). This study aimed to evaluate the effect of tibial component alignment in the coronal plane and tibial proximal morphology on the KCD. Included in this study were 51 patients scheduled for primary Oxford medial unicompartmental knee arthroplasty (UKA). The anterior and posterior KCD were preoperatively assessed using 3D simulation software with the component set perpendicular to the tibial mechanical axis (neutral), 3° valgus, 3° varus, and 6° varus, relative to neutral alignment. We evaluated the existence of overhanging medial tibial condyle where the medial eminence line, the line including the medial tibial eminence parallel to the tibial axis, passes outside of the tibial shaft. In all component alignments, patients with a medial overhanging condyle had significantly shorter posterior KCD than those without. In patients with a medial overhanging condyle, the posterior KCD significantly increased when the tibial component was placed in 3° varus (4.6 ± 1.5 mm, P = 0.003 vs neutral, P < 0.001 vs 3° valgus) and 6° varus (5.0 ± 1.4 mm , P < 0.001 vs neutral, P < 0.001 vs 3° valgus) compared with in neutral (3.5 ± 1.9 mm) or 3° valgus (2.8 ± 1.8 mm). In OUKA, varus implantation increased the KCD. This could potentially decrease the risk of fracture, even in knees with the overhanging medial condyle. Conversely, valgus implantation of the tibial component shortened the KCD, and should therefore be avoided.
Subject terms | Acknowledgements
The authors would like to thank Mr. Benjamin Phillis at the Clinical Study Support Center, Wakayama Medical University for proofreading and editing.
Author contributions
T.H. and R.K. conceived of the research idea. T.K. and T.H. designed the study. T.H. instructed T.K. and Y.S. on measurement approaches for data acquisition. T.H., T.F., and K.O. collected patient data. T.K. took the lead in writing the manuscript with input from T.H. and T.M. All authors reviewed the manuscript.
Data availability
The data set is available from the corresponding author upon request.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:54 | Sci Rep. 2024 Jan 13; 14:1274 | oa_package/75/94/PMC10787735.tar.gz |
|
PMC10787736 | 38218987 | Introduction
Recently, there has been an increase in the sales and consumption of herbal supplements for complementary health 1 . However, there are gaps in the current understanding of the safety concerns from the use of herbal or natural products (NPs), including adverse effects from the NPs and from potential NP-drug interactions that can occur due to the co-consumption of NPs and pharmaceutical drugs 2 . For example, NPs such as garlic, green tea, and ginseng can modify the effect of the prescription anticoagulant warfarin, either potentiating or reducing its efficacy leading to an increased risk of bleeding or stroke from blood clots, respectively 3 – 5 . By natural products, we refer to products consisting of complex chemicals produced by living organisms. Our current focus is on botanical products intended for human consumption. The constituents of these products may interact across multiple biological systems in complex ways to contribute to their effects 6 .
A promising approach to assess safety concerns for NPs is a retrospective pharmacovigilance analysis of adverse event reports from spontaneous reporting systems such as the FDA Adverse Event Reporting System (FAERS) 7 , 8 . A major challenge in pharmacovigilance for NPs is the need for more standardization for coding events involving NPs. The lack of standardization in adverse event reports related to NPs leads to challenges in parsing and identifying the products' names and ingredients due to their non-uniform representation in the reports 8 , 9 . Therefore, researchers often encounter unfamiliar NP names or spelling variations when identifying reports for pharmacovigilance 2 . For example, the FAERS database includes more than forty-four (44) names referring to "Licorice", including "Liquorice", "Glycyrrhiza glabra", and "Glycyrrhiza laevis".
Equation (1) Gestalt pattern-matching similarity: $GPM(S_1, S_2) = \frac{2 K_m}{|S_1| + |S_2|}$, where $K_m$ is the total number of matching characters, obtained from the longest common substring between $S_1$ and $S_2$ plus, applied recursively, each subsequent common substring between the unmatched regions on either side.
Equation (2) Normalized Levenshtein distance: $\mathrm{LED}_{norm}(a, b) = \frac{\mathrm{LED}(a, b)}{\max(|a|, |b|)}$, where $\mathrm{LED}(a, b)$ is the minimum number of single-character insertions, deletions, and substitutions required to transform $a$ into $b$.
Previous work has used fuzzy string-matching to overcome this limitation 10 . This approach helps mitigate the effects of similar name variations and misspellings but does not fully bridge the gap between the spectrum of names referring to the same product 8 , 10 , such as matching the common name "Liquorice" to its equivalent Latin binomial name "Glycyrrhiza glabra".
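The strengths and limits of fuzzy string-matching can be illustrated with Python's standard difflib library, which implements Gestalt pattern-matching (the same get_close_matches function referenced in the Methods); the candidate list here is a made-up toy example:

```python
import difflib

# Toy candidate list of known product names (illustrative only)
candidates = ["LIQUORICE", "GLYCYRRHIZA GLABRA", "CINNAMON", "GREEN TEA"]

# Similarity ratio = 2*K_m / (|S1| + |S2|), per Gestalt pattern-matching
ratio = difflib.SequenceMatcher(None, "LIKORICE", "LIQUORICE").ratio()
print(round(ratio, 2))  # 0.82

# get_close_matches ranks candidates by this ratio (default cutoff 0.6)
print(difflib.get_close_matches("LIKORICE", candidates))  # ['LIQUORICE']

# The gap fuzzy matching cannot bridge: a misspelled common name is not
# close, character-wise, to the Latin binomial of the same product
print(difflib.get_close_matches("LIKORICE", ["GLYCYRRHIZA GLABRA"]))  # []
```

The empty result in the last call is exactly the kind of semantic gap the deep learning component is intended to close.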
Equation (3) Cosine distance: $D_{\cos}(u, v) = 1 - \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}$, where $u$ and $v$ are the embedding vectors of the two input terms.
To address these shortcomings, we propose combining fuzzy string-matching and deep learning to broaden the capture of candidate NP names. A combined approach can leverage both the reliability of fuzzy string-matching and the flexibility of deep learning to identify both spelling variations and alternative names for a given product name. For example, given a misspelled form of Licorice, such as "Likorice", the model will be able to map it to its Latin binomial name, "Glycyrrhiza glabra", and to its other species by outputting a small distance between them. For this work, we utilized Gestalt pattern-matching 11 (GPM) as the fuzzy string-matching component to maximize the identification of candidate spelling variations (Eq. 1 and Fig. 1 ). The proposed deep learning approach relies on the cosine distance (Eq. 3 and Fig. 2 ) between learned embeddings to create a model that matches NP names. The deep learning approach is based on the Siamese model (SM) architecture (Fig. 3 ). The SM architecture facilitates learning the embeddings by comparison of the inputs through the contrastive loss function (Eq. 4 ). A Siamese neural network was chosen for this task because such networks have been shown to successfully address the challenge of identifying similarities over a considerable range of problems 12 . Given an unknown term and a set of alternatives, the model learns to embed the inputs so as to minimize the cosine distance between terms that are spelled similarly or that are semantically similar. Additionally, Siamese networks have been successfully trained with relatively little data 13 .
Equation (4) Contrastive loss: $\mathcal{L}(y, d) = (1 - y)\, d^2 + y \, \max(0, m - d)^2$, where $d$ is the cosine distance between the embedded pair, $y$ is the label (0 for matching, 1 for distant pairs), and $m$ is the margin.
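A minimal NumPy sketch of the contrastive loss over cosine distances follows; the margin of 1.0 and the function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def cosine_distance(u, v):
    """1 minus the cosine similarity between two embedding vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def contrastive_loss(y, d, margin=1.0):
    """y = 0 for matching pairs, y = 1 for distant pairs (as in the labels)."""
    return (1 - y) * d**2 + y * max(0.0, margin - d)**2

u = np.array([0.2, 0.9, 0.1])
v = np.array([0.2, 0.9, 0.1])
d = cosine_distance(u, v)          # ~0.0 for identical embeddings
print(contrastive_loss(0, d))      # ~0.0: matching label, close pair -> no loss
print(contrastive_loss(1, d))      # ~1.0: distant label, close pair -> penalised
```

Minimising this loss pulls matching pairs together in the embedding space and pushes distant pairs apart, up to the margin.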
We also explored the Levenshtein edit distance (LED) as another form of fuzzy string-matching. The LED algorithm (Eq. 2 ) quantifies the number of edits necessary to transform a query sequence into a target sequence by recursively comparing the characters at each position of the sequences. We opted not to include LED in the experiment seeking novel spelling variations for the following reasons. First, LED is the default fuzzy string-matching algorithm in many query systems, meaning the variations it could identify might already be present in the query set. Second, the results from the comparison experiment indicated that LED was comparable to GPM. Third, including LED in the novelty experiment would increase the burden on the team performing the manual validation with terms that we would expect to have a high overlap with the results from GPM.
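For reference, a straightforward dynamic-programming implementation of LED and its normalized form (Eq. 2 ) might look like the following sketch; the function names are illustrative:

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))      # distances for the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution (or match)
        prev = curr
    return prev[-1]

def normalized_led(a, b):
    longest = max(len(a), len(b))
    return levenshtein(a, b) / longest if longest else 0.0

print(levenshtein("LIKORICE", "LIQUORICE"))               # 2 (K->Q, insert U)
print(round(normalized_led("LIKORICE", "LIQUORICE"), 2))  # 0.22
```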
Data collection
The first data source was the Center for Excellence for Natural Product-Drug Interaction Research (NaPDI) Database, from which we collected the known product names of several NPs, some of the previously identified spelling variations, and their corresponding Latin binomial names 14 . A second data source was the FAERS database, from which we identified additional product names or spelling variations using fuzzy string-matching for 70 different NPs 7 . FAERS data from Q1 2004–Q2 2021 was loaded into a standardized database, and manual annotation was used to map 5,358 drug name strings from adverse event reports that matched NP names. The remaining 389,386 unmapped drug names from FAERS were used for the novelty experiment in this study.
Experiments
The data was used to train and evaluate the Siamese model (SM) by conducting several experiments to study the effectiveness of the SM at matching potentially relevant terms from the reports to the corresponding NP names. We initially explored the SM’s performance as a distance metric to relate NP names effectively. Then, we evaluated how the SM compared to fuzzy string-matching approaches in tackling the same problem, validating that the SM can match novel names or spelling variations from FAERS to the correct equivalent group of NPs. Finally, we combined both approaches to produce a set of candidate NP names to be manually validated and utilized during FAERS report collection.
Data pre-processing & inclusion criteria
The training data consisted of pairs of spelling variations of the product names from the manual annotation and a distance label where "1" indicated distant terms and "0" indicated matching terms. An example row of a positive matching pair might be ("Likorice", "Liquorice", 0) and a negative matching pair ("Cinnamon", "Liquorice", 1). This representation allows the Siamese Model to learn the associations between query and target terms and represent the associations as a distance between 0 and 1. For simplicity, we decided to reduce the variation across terms. To this end, the data was standardized such that any non-alphabetical characters were removed from the terms, with the only exception being the whitespace character. All characters in the terms were then capitalized. Due to limitations of the implementation of Keras’ Embedding Layer 15 , a fixed-sized cutoff for the maximum length of the terms is required, and inputs must be represented as positive integers. We chose our cutoff by choosing a number close to the sum of the average size of the terms in the data (30) plus one standard deviation (31). Therefore, terms longer than sixty-five (65) characters were discarded. The last step in this initial processing was to encode the terms into integer sequences, where each letter was mapped to its corresponding position in the English alphabet, so [A-Z] became [1–26], and the space character was mapped to the integer 27. For sequences smaller than the sixty-five (65) maximum size cutoff, 0-padding was used to pad the rest of the sequence up to the sixty-five (65) elements.
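The cleaning, character-to-integer encoding, and zero-padding steps described above can be sketched as follows (MAX_LEN of 65 per the paper; the helper name is illustrative):

```python
MAX_LEN = 65  # fixed cutoff on term length, as described above

def encode_term(term, max_len=MAX_LEN):
    """Strip non-alphabetical characters (keeping spaces), capitalize,
    map A-Z -> 1-26 and space -> 27, then zero-pad to max_len."""
    cleaned = "".join(c for c in term.upper() if c.isalpha() or c == " ")
    if len(cleaned) > max_len:
        raise ValueError("term exceeds the %d-character cutoff" % max_len)
    codes = [27 if c == " " else ord(c) - ord("A") + 1 for c in cleaned]
    return codes + [0] * (max_len - len(codes))

seq = encode_term("Glycyrrhiza glabra")
print(seq[:5], len(seq))  # [7, 12, 25, 3, 25] 65
```

The resulting fixed-length integer sequences are the form required by Keras' Embedding layer, as noted above.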
Through exploratory data analysis, we identified two sources of imbalance in our data. We found some target labels were disproportionally represented in the data and discovered that there was an additional imbalance in the proportion of matching to non-matching sequences. After this initial data processing was done, two data balancing steps were performed to reduce label imbalance. First, we balanced the representation of each target name to approximately the same amount since no target label should be overrepresented in the dataset. The additional pairs were generated by using any names matching the target name; the names were modified by adding random modifications to the query term to create new unique pairs. These random modifications were performed by first randomly selecting 40% of the characters in the query sequence, then for each of these characters a random sample was drawn from a standard uniform (0,1) distribution, the random sample determined the modification to be performed. If the sample was in the interval [0.0, 0.2), the character in that position was replaced with a new random character [A-Z] or space, if the sample was in the interval [0.2, 0.4), the character in that position was removed, if the sample was in the interval [0.4, 0.6), one random character or space was added after that position, if the sample was in the interval [0.6, 0.8), the character was transposed with the one in the previous position, and finally, if the sample was in the interval [0.8, 1.0], no modification was performed to that position. The second balancing step was similar, in that it generated matching and non-matching pairs as necessary to balance the total number of matching and non-matching pairs in the complete dataset. After the balancing procedures were completed, the 70/30 train-validation split, and a separate test/holdout set were created. A description of the number of samples in each of the sets is provided in Table 1 .
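The random-modification procedure used to generate additional pairs can be sketched as follows; the sampling intervals mirror those described above, but the function itself is an illustrative reconstruction, not the original implementation:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # letters plus the space character

def perturb(term, frac=0.4, rng=random):
    """Randomly modify ~frac of the character positions of a query term."""
    chars = list(term)
    n_mod = max(1, int(len(chars) * frac))
    # Work right-to-left so insertions/deletions don't shift pending indices
    positions = sorted(rng.sample(range(len(chars)), n_mod), reverse=True)
    for pos in positions:
        u = rng.random()  # draw from a standard uniform (0, 1) distribution
        if u < 0.2:                      # [0.0, 0.2): replace the character
            chars[pos] = rng.choice(ALPHABET)
        elif u < 0.4:                    # [0.2, 0.4): remove the character
            del chars[pos]
        elif u < 0.6:                    # [0.4, 0.6): insert after this position
            chars.insert(pos + 1, rng.choice(ALPHABET))
        elif u < 0.8 and pos > 0:        # [0.6, 0.8): transpose with previous
            chars[pos - 1], chars[pos] = chars[pos], chars[pos - 1]
        # [0.8, 1.0]: leave the position unchanged
    return "".join(chars)

random.seed(7)
print(perturb("LIQUORICE"))  # a random spelling variation of "LIQUORICE"
```

Each perturbed copy keeps the original target label, so the pair counts can be balanced without duplicating identical rows.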
Siamese model training
We utilized the SM architecture, as shown in Fig. 1 . An SM comprises two identical neural network towers with the same architecture. In our implementation, each tower is made from 65 recurrent bidirectional Long Short-Term Memory (LSTM) cells 16 . The outputs of the towers were combined using the cosine distance between the vectors of the embedded terms. The contrastive loss function was utilized during training to measure the model's accuracy. The corresponding input to each tower was first embedded into a 30-dimensional space by an embedding network comprising two (2) layers of one hundred and thirty (130) dense nodes each. The hyperparameters for the number of dense nodes, embedding dimensions, and the number of layers in the embedding network were chosen experimentally.
Comparison with fuzzy string-matching
To evaluate the model’s usefulness in identifying the correct matching NP name, we compared the model’s performance against a fuzzy string-matching approach. The algorithms utilized for fuzzy string-matching were the Levenshtein edit-distance (LED) (Eq. 2 ) as implemented in TensorFlow’s “edit_distance” and Gestalt pattern-matching (GPM) as implemented in Python’s “difflib” library “get_close_matches” function 11 , 17 . The LED is a metric used for comparing the similarity between two sequences based on their “edit distance.” Gestalt pattern-matching is an algorithm also used to compare the similarity between two sequences. The metric used for comparison was Mean Reciprocal Rank (MRR) 18 (Eq. 5 ), with which we measured the top twenty (20) results predicted to be the most similar to the target value annotated in the dataset. Additionally, we also compared the top results to any of the product names equivalent to the target. These top twenty (20) results are used as candidate NP names to be validated further.
Equation ( 5 ) Mean Reciprocal Rank (MRR): $\mathrm{MRR} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{\mathrm{rank}_i}$, where $|Q|$ is the number of queries and $\mathrm{rank}_i$ is the rank position of the first correct result for the $i$-th query.
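The metric can be implemented in a few lines; this is a generic sketch over ranked candidate lists, with the convention (an assumption) that a query whose target never appears contributes a reciprocal rank of zero.

```python
def mean_reciprocal_rank(ranked_lists, targets):
    """MRR over queries: reciprocal rank of the first correct hit per query.

    ranked_lists: one list of candidates per query, best-first.
    targets: the correct answer for each query.
    Queries whose target is absent from the list contribute 0.
    """
    reciprocal_ranks = []
    for ranked, target in zip(ranked_lists, targets):
        try:
            reciprocal_ranks.append(1.0 / (ranked.index(target) + 1))
        except ValueError:  # target not retrieved at all
            reciprocal_ranks.append(0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)
```

For example, a target ranked first in one query and second in another yields (1 + 0.5) / 2 = 0.75.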
Novelty experiment
Finally, to evaluate the applicability of our methods for pharmacovigilance research, we extracted 389,385 drug name strings from the FAERS database that were not mapped to any drugs or NP names and might contain NPs. After processing the unmapped names, 7,751 were removed because they were longer than sixty-five (65) characters. Another 41,849 sequences were identified as duplicates and were also removed; the remaining 339,785 were utilized for this novelty experiment to identify unique NP names from unmapped reports. For this experiment, we utilized a subset of 70 NPs of interest (the 70 natural products chosen were mentioned by a 2020 Market Report 19 and/or were of interest to the NaPDI Center) from the set of NP names used for training. This subset contains both the Latin Binomial and a known common name, referred to as the preferred term (PT), for each of the seventy (70) natural product name pairs. These one-hundred-and-forty (140) names were utilized as a query set to identify candidate mappings from terms found in the FAERS database. We then utilized GPM and SM to retrieve, for each query term, the top twenty (20) unmapped FAERS strings predicted to be the most similar to (least distant from) the query. In this experiment, we explored the combined results of GPM and the SM; LED was not included.
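The length-filtering and de-duplication steps above can be sketched as a simple preprocessing pass. This is a simplified stand-in for the authors' pipeline, and the example strings are hypothetical.

```python
def prepare_unmapped(strings, max_len=65):
    """Drop over-length strings and duplicates, preserving first-seen order.

    Mirrors the preprocessing described for the novelty experiment:
    strings longer than `max_len` characters are removed (the model's
    fixed input size), and repeated strings are kept only once.
    """
    seen, kept = set(), []
    for s in strings:
        if len(s) > max_len or s in seen:
            continue
        seen.add(s)
        kept.append(s)
    return kept
```

Applied to the full FAERS extract, this kind of pass reduced 389,385 unmapped strings to the 339,785 used for candidate matching.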
Manual validation
The candidate mappings between the query NP names and unmapped FAERS strings yielded by the novelty experiment were manually annotated by two health professionals to assess whether the candidate mappings were correct. This process aims to leverage their expertise with drug and NP names to validate the results from the model. We further corroborated the annotations through Cohen's kappa interrater agreement metric (Eq. 6 ) and an adjudication process to resolve the points of disagreement 20 .
Equation ( 6 ) Cohen's Kappa Interrater Agreement: $\kappa = \frac{p_o - p_e}{1 - p_e}$, where $p_o$ is the relative observed agreement among raters and $p_e$ is the hypothetical probability of chance agreement.
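The statistic can be computed directly from the two raters' label sequences; this is a generic sketch (the rater labels in the test are hypothetical, and the degenerate case $p_e = 1$ is not handled).

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters annotating the same items.

    p_o: relative observed agreement among raters.
    p_e: hypothetical probability of chance agreement, from each
         rater's marginal label frequencies.
    """
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_e = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 0.86, as reported for the single-blind evaluation, indicates agreement well beyond what the raters' marginal frequencies would produce by chance.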
Model training results
After training the SM for up to five hundred (500) epochs, the model terminated early at seventeen (17) epochs (Fig. 4 ). The best-performing epoch in this training run achieved a validation accuracy of 0.97 (validation loss: 0.03). The weights from that epoch were saved and utilized for the rest of the experiments.
A holdout set containing 2,500 pairs was utilized to compare MRR performance. For the MRR evaluation, we were only interested in a subset of the matching pairs (n = 1,000), given that we used the first element of each pair as the query and the second element as the indicator of the correct answer. Using the top 20 NP names reported as least distant to the query term by each approach, we looked for exact matches to the target pair and matches to terms equivalent to the target pair.
For exact matching, where only the annotated target string counts as a correct result, the LED approach performed best (MRR = 0.567). For equivalent matching, where any product name equivalent to the target counts as correct, the LED approach also performed best (MRR = 0.903). In both cases, the GPM approach performed similarly to LED with slightly lower MRR scores (exact = 0.563, equivalent = 0.894). In both tests, the SM achieved comparably lower MRR scores (exact = 0.438, equivalent = 0.672); see Fig. 5 and Table 2 .
Novelty results
The single-blind test evaluation showed strong agreement (kappa = 0.86) between the annotators on the identified candidate mappings. The specificity of the identified terms was the primary cause of disagreement between the annotators; where disagreements occurred, the rules in Table 3 were utilized for adjudication. After adjudication, evaluators reported that the SM identified 504 correct terms and GPM identified 595 (Table 4 ). For the 70 NPs of interest, we considered an NP "covered" by an approach if at least one correct term was identified for it (Table 5 ). When comparing these results, the GPM and SM approaches performed similarly, identifying an average of 6 and 5 reports, respectively, for the products they covered. From this novelty experiment, we identified a total of 158 novel NP names and spelling variations for 70 NPs.
Manual validation
It is worth noting that many of the terms did not overlap between the approaches (Table 6 ). The SM identified 248 unique names, while GPM identified 347. The unique terms obtained from this mapping were incorporated into our quarterly data collection from FAERS data between Q1 2004 and Q2 2022 21 . For mining reports containing mentions of NPs, we only looked at the reports involving the products for which the novel product names were identified; these 57 NPs are a subset of the original 70 NPs of interest. Including the novel terms from the experiments above resulted in the capture of 3,486 additional reports that were not previously identified in the database (Table 7 ). | Discussion
This study combined fuzzy string-matching and Siamese neural network approaches to identify NP names in adverse event reports in the FAERS database, successfully broadening the capture of NP reports by approximately 7.5%. Prior work using string-matching methods to identify NP strings in spontaneous reporting systems has relied on multiple sources of NP names to create a thesaurus for identifying adverse event reports 8 , 20 . This requires maintenance of the thesaurus and regular updates to capture relevant NPs and name variations. The present study expands upon that work with a manually annotated dataset from the FAERS database that can be used to train a model to identify NP variations. The approach can also be effectively utilized to broaden the capture of reports in other spontaneous reporting systems and to overcome challenges in NP pharmacovigilance, including lack of interoperability among NP data sources, lack of coverage of synonyms, scientific names, and common names, and ambiguity in NP names in adverse event reports 8 . The manual annotation results showed that both approaches contribute sufficient unique candidate mappings to increase the number of reports identified in FAERS, which is essential considering that only 0.4% of the reports in FAERS involve NPs.
Combined approach
We trained a SM to serve as a proxy distance metric for identifying potential spelling variations of NP names. The results of the training process are encouraging with respect to the method's potential for mining emerging variations in adverse event reports. In agreement with previous work suggesting that natural language processing approaches can outperform current methods 8 , we expected the SM to outperform fuzzy string-matching approaches. However, this was clearly not the case with our current implementation. Although the approach minimized the distance between similar terms, as seen during the training evaluation, it did not effectively maximize the distance between dissimilar ones, as suggested by the MRR comparison. This may be due to potential overlaps between the spelling and semantic similarities of the query and target space.
Potential limitations of the SM training include the completeness of the data, shortcomings of the evaluation metrics, and the generalizability of the methods. Due to the nature of the problem, the data on spelling variations for NPs utilized for training was in no way complete or exhaustive. Our approach to data processing and augmentation increases the model's capacity to generalize to novel variations at the risk of saturating and confounding the embedding space. As implemented, the SM is learning two different tasks: one of "denoising" spelling variations to the preferred term, and another of matching equivalent terms as similar. Separating these tasks and creating a model architecture for the specialized handling of each might prove advantageous. In the current work, the MRR metric only measures the top response and not the completeness of the results. Adjusting how we measure MRR might provide a more accurate assessment of the applicability of the approaches.
We chose the SM architecture for this work because it can easily be used for distance-metric learning between pairs, and Siamese models have been shown to learn distance metrics successfully even with little data. The SM approach was, at most, only comparable to approaches such as LED and GPM. Nonetheless, it proved helpful in mining adverse event reports for mentions of NPs, as seen in the novelty experiment. The novel NP names identified in the novelty experiment (supplementary material) will help refine the task of mining natural products from adverse event reports (AERs) in the future.
Limitations
We encountered some limitations in our implementation, such as the need for a fixed input size. Since the average length of the NP names considered for the study was thirty (30) characters with a standard deviation of thirty-one (31), we chose a value close to the mean plus one standard deviation as our sequence-length cutoff. The current model therefore cannot process sequences longer than sixty-five (65) characters, and approaches that generalize applicability past this threshold are desirable. A second limitation was identified in the MRR comparison experiment: for the current problem, the orthographic and semantic spaces are not mutually exclusive, and overlaps between spelling similarity and semantic dissimilarity (and vice versa) can hurt the model's performance. Another limitation of our work is that candidate names were mined for only 70 NPs of interest. A further area for improvement is that, as implemented, our model did not prioritize semantic similarity over spelling similarity, leading to an increased number of misidentified candidate NP names. Finally, the scalability of the manual validation process presents a hurdle as the number of candidate names increases.
Future work
Our future work will involve assessing how different elements, such as the amount of noise used in data processing and the size of the train/validation data split, impact the model's training performance. We also plan to investigate alternative ways of handling data processing, including adding features to the data and creating model architectures that separately consider orthographical and semantic similarity. Moreover, we aim to expand our candidate identification process by mining candidates for a broader range of natural products. We will prioritize semantic similarity over spelling similarity to improve accuracy. Additionally, we will focus on enhancing the reliability of our methods to reduce the need for manual validation. We believe it is important to continue this work. As our methods of identifying the mention of NPs in AERs improve, we expect to pick up more NaPDI signals, enhancing patient safety through NPs pharmacovigilance. | Conclusion
A SM was trained to identify potential spelling variations of NP names. The SM model training terminated early at seventeen (17) epochs, achieving a validation accuracy of 0.97. In MRR evaluation, the SM performance was, at most, comparable to that of the fuzzy string-matching approaches. In the novelty experiment, GPM and SM performed similarly in identifying correct terms. The unique terms obtained were incorporated into the quarterly data collection process, resulting in the capture of 3,486 additional reports. By combining both the SM and GPM, a broader capture of NP names was achieved. Nonetheless, careful manual validation is still required for validation of the identified candidate names. Through this process of novel NP name discovery and interaction detection, we can help further research on natural product drug interactions. | Increased sales of natural products (NPs) in the US and growing safety concerns highlight the need for NP pharmacovigilance. A challenge for NP pharmacovigilance is ambiguity when referring to NPs in spontaneous reporting systems. We used a combination of fuzzy string-matching and a neural network to reduce this ambiguity. Our aim is to increase the capture of reports involving NPs in the US Food and Drug Administration Adverse Event Reporting System (FAERS). For this, we utilized Gestalt pattern-matching (GPM) and Siamese neural network (SM) to identify potential mentions of NPs of interest in 389,386 FAERS reports with unmapped drug names. A team of health professionals refined the candidates identified in the previous step through manual review and annotation. After candidate adjudication, GPM identified 595 unique NP names and SM 504. There was little overlap between candidates identified by each (Non-overlapping: GPM 347, SM 248). We identified a total of 686 novel NP names from FAERS reports. Including these names in the FAERS collection yielded 3,486 additional reports mentioning NPs.
| Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-023-51004-4.
Acknowledgements
We take this opportunity to acknowledge the excellent work of our collaborators from the University of Pittsburgh School of Pharmacy.
Author contributions
I.O.D. wrote the main manuscript text and contributed programming expertise in the end-to-end implementation of the Siamese model, the fuzzy string-matching comparison, and the novelty experiments. He provided methodological contributions to the experiment comparing the Siamese model and the fuzzy string-matching and performed the analysis. T.B. contributed to the methodology and implementation of the Siamese model experiment. In addition, he conducted part of the data collection and preparation. S.K. contributed to the methodology and implementation of the Siamese model experiment. In addition, she conducted part of the data collection and preparation. S.B.T. provided extensive methodological contributions and programming expertise to the Siamese model experiment. She also coordinated and oversaw the novelty experiment's manual validation and adjudication process. In addition, she conducted part of the data collection and preparation. X.L. contributed relevant pharmaceutical expertise and participated in manually validating and adjudicating the resulting data from the novelty experiment. M.R.C. contributed relevant pharmaceutical expertise and participated in manually validating and adjudicating the resulting data from the novelty experiment. R.D.B. was the project coordinator of the whole research project and contributed intellectually to the methods of all the experiments, including the Siamese model, the fuzzy string-matching, and the novelty experiment. I.O.D., T.B., S.B.T., X.L., M.R.C., and R.D.B. contributed to the manuscript development and further revision.
Funding
This work was supported by the National Institutes of Health (T15LM007059 and U54AT008909), which provided funding and resources for the completion of this research.
Data availability
The full list of natural product names identified in FAERS for the 70 NPs of interest can be found in the supplementary material. The data utilized for both training the Siamese Model and the identification of NP candidates through the combined approach is available as open access data through Zenodo: https://doi.org/10.5281/zenodo.8155759.
Code availability
The code and data utilized for this work are available from the following GitHub: https://github.com/dbmi-pitt/np_name_finder . The repository includes the code, configuration files, and data required to reproduce the work.
Competing interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | CC BY | no | 2024-01-15 23:41:55 | Sci Rep. 2024 Jan 13; 14:1272 | oa_package/f6/07/PMC10787736.tar.gz |
PMC10787737 | 38218750 | Background
In competition, physique athletes are subjectively judged and ranked on muscle size, proportion, symmetry, bodyfat levels, and posing ability on the day. Accordingly, stronger performers maximise these variables by implementing appropriate pre-competition nutrition and training strategies [ 1 , 2 ]. In recent studies, contest preparation typically consists of at least four months of energy and thus carbohydrate (CHO) restriction in conjunction with increased training volumes [ 3 – 5 ]. The final week leading into competition is termed “peak week” and involves further manipulation of nutrition and training variables to improve appearance, ostensibly by increasing muscle glycogen (and thus muscle size) while minimising subcutaneous water (supposedly enhancing muscular definition) and abdominal bloating [ 3 , 5 , 6 ]. Feasibly, a greater understanding of how to manipulate core nutritional factors around peak week, notably CHO, could result in a more successful “peak” and improved performance.
Glycogen is the storage form of glucose derived from dietary CHO, of which skeletal muscle is the largest store within humans [ 7 ] (see Fig. 1 for a graphical representation of the glycogenesis pathway). Muscle glycogen is heterogeneously distributed between and organised in three distinct subcellular compartments (intramyofibrillar, intermyofibrillar, and subsarcolemmal spaces) within myofibers [ 8 , 9 ]. The time course for full intramuscular saturation through supercompensation is variable and likely occurs 36–48 h following the cessation of the last exercise bout and CHO ingestion [ 10 – 12 ]. Amongst other factors, the rate of glycogenesis depends on total CHO and energy intake, sensitivity to and levels of serum insulin, prior glycogen depletion, muscle contraction-stimulated translocation of glucose transporters, gastrointestinal transport protein density, and relevant enzymatic activity [ 12 – 22 ]. Intramuscular glycogen size and density vary based on the subcellular site [ 23 , 24 ] and total muscle glycogen content, with the larger macroglycogen particles stored with greater saturation two to three days into loading [ 25 , 26 ]. Subcellular distribution is also dependent on training adaptations, where intermyofibrillar and subsarcolemmal glycogen content are greater in resistance-trained individuals than endurance-trained athletes [ 27 , 28 ]. While CHO loading can increase muscle size through muscle glycogen content [ 29 – 31 ], the effect of individual glycogen particle volume and its subcellular distribution on muscle size and appearance is unknown. Feasibly, a better understanding of these physiological processes would allow physique athletes to adopt more specific nutritional and training strategies to enhance performance.
CHO loading protocols were first studied in endurance athletes, measuring performance and muscle glycogen levels, with muscle glycogen supercompensation observed following depletion and CHO loading [ 35 – 38 ]. Physique athletes subsequently adapted such strategies, manipulating CHO intake and training to enhance the appearance of muscle size [ 6 ]. However, muscle size changes in response to a CHO load are rarely an outcome measure in endurance training research, and the impact of loading on appearance is not relevant to endurance athletes. Muscle size increases in physique athletes have only been observed recently within a quasi-experimental design [ 30 ] and two case studies [ 39 , 40 ], highlighting a paucity of empirical evidence to validate and guide these strategies. This review will highlight gaps in the literature, and subsequently provide suggestions for future research. Furthermore, relevant CHO loading trials are described while previously published information specifically relating to CHO manipulation strategies employed by physique athletes in peak week is examined. | Conclusions
Despite the extent of its effect on physique performance being largely unexplored, CHO manipulation strategies are widely employed by physique athletes [ 6 ]. Only one quasi-experimental trial, one limited experimental trial, and few observational studies have examined CHO loading in physique athletes—highlighting a need for further, well designed studies of the topic. Accordingly, experimental designs which closely mimic the nutritional and training practices of bodybuilders and the physiological conditions they are in during peak week will help both practitioners and athletes implement appropriate peaking strategies to maximise physique sport performance. Notably, ideal peaking protocols may differ by many factors that are not yet well-explored in the literature, including competitor division as well as specific performance enhancing drug-use (or lack thereof). As recruitment of physique competitors is understandably difficult [ 95 ], further quasi-experimental designs comparing more diverse samples of physique athletes who utilise different strategies may be a feasible alternative to elucidate the interactions of these variables on physique sport performance. | Background
Physique athletes are ranked by a panel of judges against the judging criteria of the corresponding division. To enhance on-stage presentation and performance, competitors in certain categories (i.e. bodybuilding and classic physique) achieve extreme muscle size and definition aided by implementing acute “peaking protocols” in the days before competition. Such practices can involve manipulating nutrition and training variables to increase intramuscular glycogen and water while minimising the thickness of the subcutaneous layer. Carbohydrate manipulation is a prevalent strategy utilised to plausibly induce muscle glycogen supercompensation and subsequently increase muscle size. The relationship between carbohydrate intake and muscle glycogen saturation was first examined in endurance event performance and similar strategies have been adopted by physique athletes despite the distinct physiological dissimilarities and aims between the sports.
Objectives
The aim of this narrative review is to (1) critically examine and appraise the existing scientific literature relating to carbohydrate manipulation practices in physique athletes prior to competition; (2) identify research gaps and provide direction for future studies; and (3) provide broad practical applications based on the findings and physiological reasoning for coaches and competitors.
Findings
The findings of this review indicate that carbohydrate manipulation practices are prevalent amongst physique athletes despite a paucity of experimental evidence demonstrating the efficacy of such strategies on physique performance. Competitors have also been observed to manipulate water and electrolytes in conjunction with carbohydrate; these practices are predicated on speculative physiological mechanisms and may be detrimental to performance.
Conclusions
Further experimental evidence which closely replicates the nutritional and training practices of physique athletes during peak week is required to make conclusions on the efficacy of carbohydrate manipulation strategies. Quasi-experimental designs may be a feasible alternative to randomised controlled trials to examine such strategies due to the difficulty in recruiting the population of interest. Finally, we recommend that coaches and competitors manipulate as few variables as possible, and experiment with different magnitudes of carbohydrate loads in advance of competition if implementing a peaking strategy.
Key Points
Physique athletes regularly implement “peak week” strategies based on the endurance training research. At present it appears that carbohydrate (CHO) loading strategies may increase muscle size; however, the effects on overall aesthetic performance are unknown. Due to a lack of data, it is difficult to make detailed peak week recommendations. Rather, it may be advisable to load with 3–12 g/kg/BM of CHO to increase muscle glycogen content, with this broad range representing different individual and divisional requirements. To optimise the magnitude of CHO load, coaches and competitors could establish an individual response pattern before competition by practicing and trialling peaking strategies in similar physiological conditions to peak week, and by using information from previous competitions. Further, manipulating as few variables at a time as possible could have the greatest physiological and psychological benefits. Experimental designs which assess visual physique changes while placing participants in ecologically valid physiological conditions are needed to fully elucidate the effects of CHO, water, and electrolyte manipulation peaking strategies.
Carbohydrate Loading Studies in the Endurance Literature
The study of interactions between muscle glycogen content, diet, and exercise performance began with a series of Swedish experimental trials in the 1960s utilising the then novel percutaneous muscle biopsy technique [ 35 – 38 , 41 – 43 ]. The aim of this research was to investigate the effect of muscle glycogen as a stored energy substrate on endurance performance and the determinants of subsequent glycogenesis. While the effects of CHO loading on appearance lack relevance to endurance athletes, the findings of these trials have implications for physique athletes seeking to increase muscle glycogen content and enhance muscle size. Of the designs which manipulated diet, muscle glycogen supercompensation was observed from the consumption of a predominantly CHO diet following exhaustive, glycogen depleting exercise [ 35 – 38 ]. Further experimentation with large CHO loads scaled to bodyweight (ranging from 9 to 12 g/kg/day) for two to three consecutive days yielded significant muscle glycogen increases within the context of endurance training [ 44 – 49 ]. For example, McInerny et al. [ 47 ] depleted muscle glycogen content from 435 ± 57 to 96 ± 50 mmol/kg dry weight (DW) ( p < 0.01) in the vastus lateralis of six well-trained endurance athletes with an exhaustive cycling protocol. Two days of CHO loading with 12 g/kg/day following the protocol resulted in supercompensation to 713 ± 60 mmol/kg DW ( p < 0.01).
Similarly, Goforth et al. [ 49 ] implemented a three-day exercise and diet-induced (53 ± 9 g CHO/day) glycogen depleting protocol followed by a three-day repletion (720 ± 119 g CHO/day) without exercise in 14 male endurance athletes. Muscle glycogen content in the vastus lateralis increased from 408 ± 168 to 729 ± 222 mmol/kg DW ( p ≤ 0.05). This supercompensated state was then maintained over the next two days with a moderate-CHO intake (332 ± 41 g). The preservation of muscle glycogen following supercompensation [ 49 , 50 ] could be advantageous for physique athletes who prefer to load CHO earlier in the week, further away from competition. Indeed, this protocol is known as CHO “front-loading”, whereby competitors load at the start of peak week which theoretically allows more time to adjust nutritional intake according to appearance [ 5 , 6 ].
In another study, Nygren et al. [ 31 ] leveraged magnetic resonance imaging to show vastus lateralis (+ 3.2%, p = 0.001) cross-sectional area and thigh circumference (+ 2.7%, p = 0.009) increases, coinciding with increased muscle glycogen content from 281 ± 42 to 634 ± 101 mmol/kg DW in five male participants. These changes were due to a four-day glycogen depleting protocol involving a low-CHO, high-fat diet with exhaustive exercise followed by four days of a high-CHO and low-fat diet without exercise. While promising, a small sample size and accordingly reduced statistical power constrains the generalisability of the results. Nonetheless, these findings indicate that intramuscular glycogen content changes may affect muscle size.
Hypothetically, glycogen-mediated muscle size increases are driven by increased intramuscular water as water molecules are bound to each stored glycogen particle [ 51 – 53 ]. The water bound to each particle is variable and seemingly determined by hydration status [ 53 ], although glycogenesis is likely not impaired by dehydration [ 54 ]. In a dehydrated state, Olsson and Saltin [ 52 ] concluded that at least three to four grams of water are stored intramuscularly with each gram of glycogen; however, changes in water content were measured at the whole-body level using tritium labelled water and not directly in muscle tissue.
Within a crossover trial that measured intramuscular water via muscle biopsy samples, Fernández-Elías et al. [ 53 ] created two experimental conditions where a CHO syrup was consumed with or without a rehydrating volume of water following cycling in the heat. Both groups consumed a CHO drink, with the rehydrating group consuming additional water to match individual fluid losses. Although both groups experienced similar glycogen repletion four hours following ingestion, muscle water content was higher in the rehydrating group than the non-rehydrating group (3814 ± 222 vs. 3459 ± 324 g/kg DW, p < 0.05), with 17 g of water bound to each gram of glycogen in the rehydrating group compared to only 3 g in the non-rehydrating group; accordingly, substantially increasing muscle volume via concurrent CHO and fluid ingestion may be relevant in the context of physique athletes. However, as muscle water content did not reach baseline levels in either group, strategies involving dehydration may not be advisable. It is also unknown if emphasising hydration status in physique athletes could impact the appearance and performance in other ways, as some authors hypothesise that higher levels of body water increase subcutaneous tissue thickness (ST), which may obscure muscular definition, while acknowledging that the efficacy of strategies to manipulate hydration status requires further examination [ 1 ].
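To show why these gram-for-gram water figures matter for muscle volume, the following back-of-envelope sketch estimates the combined mass of stored glycogen and bound water for a given CHO load. The storage fraction is our own illustrative assumption, not a value from the review; the water ratio defaults to the 3 g/g lower bound from Olsson and Saltin and can be raised toward the ~17 g/g observed with full rehydration by Fernández-Elías et al.

```python
def glycogen_water_gain(body_mass_kg, cho_g_per_kg, storage_fraction=0.6,
                        water_per_g_glycogen=3.0):
    """Back-of-envelope estimate of glycogen-associated mass gain (kg).

    Assumptions (illustrative only):
      - `storage_fraction` of the CHO load ends up as muscle glycogen
        (hypothetical value; true storage depends on depletion, insulin
        sensitivity, transporter density, etc.)
      - each gram of glycogen binds `water_per_g_glycogen` grams of water
        (3-4 g/g in a dehydrated state; up to ~17 g/g when rehydrated)
    """
    glycogen_g = body_mass_kg * cho_g_per_kg * storage_fraction
    water_g = glycogen_g * water_per_g_glycogen
    return (glycogen_g + water_g) / 1000.0

# e.g. an 80 kg competitor loading 10 g/kg/day:
# glycogen_water_gain(80, 10) -> 1.92 kg of glycogen plus bound water
```

Varying `water_per_g_glycogen` between the dehydrated and rehydrated extremes makes clear how strongly hydration status could moderate any visible size change from the same CHO load.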
Dissimilarities Between Endurance and Physique Athletes
The theoretical underpinning and rationale for physique sport CHO loading protocols was born from endurance research. However, as endurance athletes are unconcerned with the aesthetic effects of CHO loading, research on the topic is not necessarily relevant or practical for physique athletes. Furthermore, the physiology of physique athletes at the end of contest preparation may be different from that of the typical endurance athlete. While some physique athletes potentially engage in high volumes of cardiovascular exercise [ 55 – 57 ], the prolonged periods of dieting, characterised by extreme reductions of both CHO and fat with the goal of achieving exceptionally low body fat, far below endurance athletes [ 39 , 58 – 60 ] prior to CHO loading, differentiate the athletes. Additionally, physique athletes’ serum insulin concentrations decrease throughout contest preparation, considerably below the reference range in the week preceding competition [ 58 , 59 ]. Given these physiological differences, it is difficult to directly apply literature-based endurance protocols to physique sport and doing so may not enhance aesthetic performance.
Unlike physique athletes during peak week, the goal of the endurance athlete is to fully saturate muscle and liver glycogen stores to reduce the likelihood of muscle glycogen depletion and hypoglycaemia, and their negative performance effects [ 34 , 61 , 62 ]. Endurance athletes likely have a greater glycogenesis rate and capacity compared to physique athletes in peak week because of their habituation to a high-CHO diet and the absence of extensive energy restriction. Thus, implementing endurance-based protocols in physique athletes may lead to greater CHO consumption than can be digested and absorbed in the gastrointestinal tract and synthesised as glycogen before competition [ 16 , 19 , 63 – 65 ]. This is especially relevant as physique athletes theorise that when CHO consumption exceeds total glycogen storage capacity and/or the maximal rate of glycogenesis, glucose accumulates in other body compartments, including the interstitial space of the subcutaneous layer [ 5 ], increasing compartmental fluid volume through the osmotic effect of glycogen on water [ 52 ]. This rise in subcutaneous water, an effect known as "spilling over", is thought to blur muscular definition (often called "conditioning" in bodybuilding circles) [ 1 , 5 ]. Hence, the implementation of CHO loads of the same magnitude as utilised by endurance athletes may not translate to competitive success in physique sport.
The Female Menstrual Cycle and Implications for Physique Athletes
In addition to the considerations described above, other physiological variables may be relevant. Notably, the effect of the menstrual cycle on glycogenesis following CHO loading in endurance athletes has been examined. For example, glycogen storage capacity decreases and the efficacy of supercompensation increases during the follicular phase, while the inverse occurs in the luteal phase [ 66 ] (see Fig. 2 ). Although the underlying mechanisms have yet to be fully understood, and a comprehensive examination is beyond the scope of this paper, menstrual phase-specific differences may be mediated by the effects of oestrogens on glycogen synthase expression, insulin secretion, and adipocyte free-fatty acid oxidation [ 67 – 70 ]. Thus, muscle glycogen storage is theoretically elevated in the luteal phase compared to the early follicular phase [ 67 ]; however, large CHO loads have induced supercompensation to similar values in both menstrual phases in some trials [ 46 , 71 ], but not in others [ 72 , 73 ]. Given this ambiguity, it is difficult to make menstrual cycle phase-specific recommendations for CHO loading magnitudes for female competitors. Furthermore, female competitors commonly experience menstrual cycle disruption and hypothalamic amenorrhea close to competition due to low adiposity and energy availability resulting from extreme dieting [ 74 – 80 ]. Chronic low energy availability reduces oestrogen and progesterone levels below normal physiological ranges [ 81 ], which may impair muscle glycogen storage following a CHO loading protocol.
The theoretical variability in response to CHO intake during different phases of the menstrual cycle, or with hypothalamic amenorrhea, highlights the importance of individualised nutritional approaches to physique sport peaking. To better anticipate aesthetic changes and establish an individual response pattern, female competitors may benefit from experimenting with different CHO loads throughout the menstrual cycle during contest preparation (assuming it is present). Such an approach may provide information on CHO load magnitude and timing to inform future peaking strategies. Male competitors could also benefit from individualisation trial runs, potentially to a greater degree than their female counterparts, as their physiological response may be more consistent, although research is needed to confirm if relevant sex differences exist.
Observational Designs in Physique Athletes
Cross-Sectional Designs
While studies regarding CHO loading in physique competitors are likely more relevant than those concerning endurance performance, they are rare. Nevertheless, the few cross-sectional examinations that exist (summarised in Table 1 ) provide insight into peaking strategies employed by physique athletes. For example, in a recent survey of peaking strategies, Chappell and Simper [ 6 ] reported that 91% of a sample of 81 natural British bodybuilders ( M = 59, F = 22) implemented some form of CHO manipulation. Of the peak week strategies included in the 34-item questionnaire, CHO manipulation was the most employed, where restriction was followed by loading in competitors who utilised both. Qualitative responses indicated that both restriction and loading phases lasted up to four days, with the aim of depleting muscle glycogen stores before inducing supercompensation to increase muscle size. Specific competition-day strategies were also recorded, with 71.6% consuming high-glycaemic index CHO 30–60 min prior to competition and 39.5% CHO loading. While drawn from a specific sample of physique athletes, these data indicate that CHO manipulation strategies are prevalent and popular.
Similarly, albeit with a smaller sample, Mitchell et al. [ 82 ] interviewed seven experienced bodybuilders (10.4 ± 3.4 years’ experience and 14.3 ± 5.9 competitions) to elucidate their adopted contest preparation nutritional strategies and associated rationale. Six participants used a modified CHO loading regimen involving increasing CHO and concurrently tapering training. Specifically, one participant detailed having a higher intake (400 g) earlier in the week preceding two to three days of modest restriction (as low as 250 g) before increasing CHO to 300–400 g the day preceding competition. Four participants also reported implementing a CHO “backload”, which involved a three-day depletion followed by loading. Notably, there was dissatisfaction with the protocol, due to its perceived inability to induce appreciable changes in appearance and the psychological distress caused.
Experiences of psychological distress (i.e. increased tension, anxiety, anger, depression, and fatigue) are in line with studies of bodybuilders indicating prominent mood disturbances around the end of contest preparation [ 59 , 74 , 83 ]. Mood states likely degrade during contest preparation due to the extended period of energy restriction leading to low energy availability and the very low body fat levels achieved, far below competitors’ lower intervention point [ 84 ]. Mood disturbances could also be attributed to competition-day anxiety, potentially amplified by CHO loading prompting fears of “spilling over”. Researchers have proposed that psychological stress can negatively affect appearance through increased secretion of adrenocortical hormones, intensifying sodium reabsorption and potentially expanding extracellular fluid volumes [ 1 , 85 ]; however, the effect of such water retention on appearance is unexplored. Thus, further investigation into the effects of CHO manipulation strategies on mood disturbances over the entirety of peak week and quantifiable physique changes is required to determine associations of mood states with physique sport performance.
Single-Subject Designs
While long-term case studies examining bodybuilders pre- and post-competition have been published, few report peak week strategies or their possible effects [ 59 , 78 , 86 ]. A recent case study by Barakat et al. [ 40 ] is the most detailed examination of the effects of CHO manipulation on body composition outcomes to date; specifically, a natural male competitor followed a peak week protocol devised by the research group [ 1 ]. CHO consumption on the first day of data collection (nine days out from competition) was 297 g, which was reduced to 88, 73, and 88 g the preceding three days of depletion (six to four days out), respectively. CHO loading involved 582 g and 573 g the following two days (three to two days out), respectively, before tapering to 399 g the day before competition. The pattern of fat intake was inverse to CHO, where the highest intakes (86–132 g) occurred during CHO depletion. Finally, water intake also followed a somewhat similar pattern to CHO consumption from nine to two days out, with the lowest intake on the final day before competition. This was described as an attempt to reduce body water while preserving intramuscular glycogen and triglyceride stores with the cessation of physical activity.
Overall, there were favourable outcomes due to these combined strategies. The sum of ultrasound measures of muscle thickness (MT) collected from four sites (distal and proximal quadriceps, chest, and elbow flexors) was positively associated with CHO intake from the previous day throughout peak week ( τ = 0.733, p = 0.056). Prior to depletion, the sum of MT was 18.56 cm, which increased to 18.99 cm on the morning of competition. Relative quadriceps and chest MT increased, while elbow flexors decreased when comparing measurements from the previously mentioned data collection points. Indeed, the desired alterations in total MT (+ 2.32%) and subcutaneous thickness (ST; − 0.67%) were observed from the start of the protocol. With that said, it is challenging to untangle the individual effects of any single aspect of the combined peaking strategy within a case study design, which included manipulations of CHO, water, and dietary fat.
For instance, it is debatable whether CHO restriction is required to induce subsequent maximal glycogen supercompensation. Notably, equivalent and maximal muscle glycogen supercompensation can be achieved without prior cessation of dietary CHO [ 10 , 11 ], which may indicate that depletion is not necessary, and leaves the question of whether comparable body composition changes could have been achieved with a more consistent CHO intake. Likewise, the strategy employed by Barakat et al. [ 40 ] of increasing fat intake while depleting CHO is known as “fat-loading” and is an attempt to increase intramuscular triglyceride content and thus muscle size. While no experimental evidence exists on fat-loading, this approach is rationalised by the energy content of intramuscular triglyceride being higher than that of glycogen [ 87 ]. However, as appreciable muscle size changes are likely driven by the water bound to glycogen rather than its energy density, the extent to which fat-loading increases muscle size may be negligible, and the practice may simultaneously increase ST, as there is no known mechanism for preferentially storing triglycerides intramuscularly rather than subcutaneously.
Most importantly, it is difficult to determine the “visual” effects of this protocol on the participant’s physique, as there was no subjective judging or quantification of the competitor’s appearance. While anthropometric measurements indicated success, there are no data which correlate anthropometric changes with visual changes. Notably, the lack of visual, subjective assessments (e.g. photograph physique score changes on a 1–10 scale by a panel of qualified physique judges) is a persistent limitation of physique athlete case studies.
Another case study, conducted by Schoenfeld et al. [ 39 ], documented the effect of CHO loading on MT during contest preparation. In the final week before one of the participant’s four competitions, ultrasound MT was obtained at four sites (elbow flexors and extensors, midthigh and lateral thigh). Measurements were collected following a three-day depletion phase, the subsequent two-day loading phase, and finally one hour after the previous measure following CHO ingestion. The athlete decreased energy to 1474–1642 kcal/day and CHO to 20–46 g/day during depletion, lower than the lowest two-week rolling average intake during contest preparation (1953 kcal and 104 g/day), which was then increased during loading to 3374–3537 kcal/day and 449–483 g/day, for energy and CHO, respectively. The authors reported 5% and 2% upper arm and quadriceps MT increases, respectively, at the post-loading measurement compared to the post-depletion phase, and no changes following the post-loading 50 g CHO bolus. While MT increased after loading, these increases were relative to post-depletion values. Because the authors did not provide baseline MT data from before depletion, whether the post-loading MT values improved upon pre-depletion values remains unknown. Thus, the efficacy of the strategy cannot be assessed, since it is possible that similar final MT values could have been achieved without a peak week strategy. Future research should compare baseline outcome measures with post-depletion and loading values to better evaluate peaking strategies.
Additionally, further case studies provide indirect insight into the effects of CHO manipulation on body composition. For example, Rossow et al. [ 59 ] followed a white, male professional natural bodybuilder for 12 months pre- and post-competition. The authors reported increased body water (60.48–62.12 L) and decreased body fat (6.6–4.5%) and sum of ultrasound ST (11 sites, 0.85–0.68 cm) a week before competition versus a month prior. These changes corresponded with the highest weekly mean energy intake and a marked blood glucose increase from three months prior (52–72 mg/dL). While total CHO intake was unreported, the increased energy intake, body water, and blood glucose may be attributed to increased CHO as part of a peaking strategy. Similarly, Halliday et al. [ 75 ] reported a modest increase in mean CHO intake to 3.8 g/kg in the final week of a female figure competitor’s contest preparation from 3.4 and 2.7 g/kg at weeks one and 10, respectively. Energy intake was also the highest recorded since week three of contest preparation, corresponding with a skinfold thickness reduction from four weeks prior. However, as CHO intake was reported as weekly means and not as specific daily intakes, it is difficult to discern if a specific peaking protocol was implemented. Despite indications of potential CHO manipulation in both Rossow et al. [ 59 ] and Halliday et al. [ 75 ], it is difficult to interpret which specific protocols were implemented and their potential efficacy.
While a unique nutritional intake during “peak week” which includes CHO manipulation itself is a popular strategy amongst physique athletes [ 6 ], the specific pattern and magnitude of CHO can vary widely. For example, Steen et al. [ 88 ] documented the use of a traditional CHO loading regimen by a drug-enhanced male bodybuilder. The competitor restricted CHO for three days before loading with 300 g the day before and on competition day. Likewise, Hickson et al. [ 89 ] also detailed the use of a similar protocol by another enhanced male bodybuilder, who depleted CHO for two days before loading with only 100 g for the next three days before competition. Contrastingly, a very high intake of CHO was captured within a clinical case report of a professional bodybuilder admitted to intensive care due to bilateral lower limb paralysis [ 90 ]. The athlete reported consuming minimal CHO in the month preceding competition before loading with 800 g of high-glycaemic index CHO on competition day. While no anthropometric data were collected in these case studies [ 88 – 90 ], they highlight substantial variability in peak week approaches. All relevant case studies are summarised in Table 2 .
Multiple-Subject Designs
While the physique sport literature predominantly consists of case studies, there are some multiple-subject studies which may provide more generalisable findings (see Table 3 for a summary of multiple-subject observational studies). Bamman et al. [ 29 ] followed six male bodybuilders for twelve weeks preceding competition. Unfortunately, despite stating a CHO load commenced 72 h before competition and reporting a mean CHO intake (290 ± 73 g) from a three-day dietary profile completed the same day as the commencement of loading, day-to-day dietary intake was undisclosed. In the final 24–48 h preceding competition during CHO loading, ultrasound biceps MT reportedly increased (+ 4.9%), while the ST measure from the same site decreased (− 29.4%) from six weeks prior; however, the results should be interpreted with caution, since neither met the threshold for statistical significance ( p > 0.05). Further, due to the unclear results, the long intervals between data collection points, and the lack of detailed day-to-day nutritional information, direct causal inferences cannot be drawn from this study.
In two studies which assessed dietary intakes but did not track body composition changes of female bodybuilders, CHO intake increased in the immediate days prior to competition [ 91 , 92 ]. Walberg-Rankin et al. [ 93 ] reported increased CHO consumption two days before competition compared to data collected one and three weeks prior. Specifically, this involved an almost twofold group-level CHO intake increase (202.7–385.9 g, p = 0.001), accounting for 83% of total energy. Similarly, Lamar-Hildebrand et al. [ 92 ] drew comparisons between in-season and off-season bodybuilders and made similar observations. The competitors increased energy intake (1283 ± 789 to 2228 ± 1192 kcal) on the weekend of competition, driven by higher CHO consumption (222 ± 149 to 359 ± 194 g). While these group-level observational studies demonstrate the use of CHO loading strategies amongst female bodybuilders and their magnitudes, the efficacy of these practices cannot be determined due to the absence of body composition data. To summarise, both case study and multiple-subject observational studies indicate that CHO manipulation is a common strategy amongst physique athletes; however, the positive impact on anthropometry hinted at by this literature remains an untested assumption.
In addition to CHO manipulation, physique athletes may concurrently manipulate electrolyte and water intake when peaking [ 94 ]. This practice is intended to increase intracellular water (ICW) while decreasing extracellular water (ECW), supposedly to expand muscle and reduce subcutaneous water, respectively [ 1 , 40 , 95 ]. This theory is rationalised by the high concentration of sodium and potassium in ECW and ICW, respectively, associated with cell fluid volume (i.e. the sodium potassium pump) [ 96 ]. Consequently, bodybuilders and researchers propose that increasing potassium while reducing sodium intake alters cellular concentrations of these ions, which when combined with increased muscle glycogen content, creates an osmotic gradient for interstitial water to be drawn into muscle [ 1 , 40 , 95 ]. The proposed outcome of such a process is a favourable ICW/ECW ratio, which may enhance the appearance of muscle fullness and definition [ 1 ]. As such, techniques to estimate the distribution of fluid compartments in the context of peaking are of interest to the physique sport population [ 40 , 95 ].
To examine if such fluid shifts are indeed achieved by bodybuilders via peaking protocols, researchers have adopted bioelectrical impedance analysis (BIA) and bioelectrical impedance spectroscopy (BIS). For example, Nunes et al. [ 95 ] employed a single-frequency BIA device to compare competition-day body water fraction changes from the day prior in 11 male competitors. Each participant achieved simultaneous ICW increases and ECW decreases, increasing their ICW/ECW ratio as presumably intended. While the lack of dietary data is a limitation, the authors hypothesised the bodybuilders manipulated CHO, electrolytes, and water, causing these outcomes. While promising, methodological limitations complicate these findings [ 95 ]. In particular, hydration status, diet, and acute water intake were, understandably, uncontrolled. Unfortunately, single-frequency BIA results are sensitive to and impacted by these variables [ 97 ]. Additionally, and most importantly, single-frequency BIA cannot distinguish between intracellular and extracellular fluid compartments, as multiple frequencies, from devices such as multi-frequency BIA or BIS, are required to do so [ 98 ]. Thus, a prediction equation developed by Matias et al. [ 99 ] was utilised by Nunes et al. [ 95 ]; unfortunately, since the equation was derived from high-level non-physique athletes, disparities in the body geometries between the sample used for calibration and physique athletes probably inflated the already unacceptably high expected fluid compartment error estimations (± 3.6–6 kg of fluid). Further, the testing conducted by Matias et al. [ 99 ] to develop the equation was highly standardised, whereby participants were required to have been fasted for 12 h, be euhydrated, and not have exercised in the past 15 h, which likely differed from the testing conditions of Nunes et al. [ 95 ]. 
These methodological shortcomings and error rates confound interpretation, and likely account for the highly homogenous competition-day ICW/ECW ratios (1.92 ± 0.01L) reported by Nunes et al. [ 95 ], while also highlighting the difficulty of standardising BIA measurements of physique athletes during peak week.
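The scale of the cited prediction-equation error relative to the narrow spread of reported ratios can be illustrated with a rough worst-case calculation. The compartment volumes below are hypothetical, chosen only to yield a ratio near the reported ~1.92; they are not data from Nunes et al., and a litre of body fluid is treated as approximately 1 kg:

```python
# Illustrative sketch: how the per-compartment fluid estimation error
# cited for BIA prediction equations can swamp an ICW/ECW ratio.
# All input values are hypothetical, not data from any cited study.

def icw_ecw_ratio(icw_l: float, ecw_l: float) -> float:
    """Intracellular-to-extracellular water ratio."""
    return icw_l / ecw_l


def ratio_bounds(icw_l: float, ecw_l: float, err_l: float) -> tuple[float, float]:
    """Worst-case ratio range if each compartment may be off by +/- err_l litres."""
    low = (icw_l - err_l) / (ecw_l + err_l)
    high = (icw_l + err_l) / (ecw_l - err_l)
    return low, high


if __name__ == "__main__":
    # Hypothetical competitor: 26 L ICW, 13.5 L ECW -> ratio ~1.93.
    # With a +/-3.6 L per-compartment error (the lower bound cited),
    # the "true" ratio could fall anywhere in a wide interval.
    print(round(icw_ecw_ratio(26, 13.5), 2))
    lo, hi = ratio_bounds(26, 13.5, 3.6)
    print(round(lo, 2), round(hi, 2))
```

Under these assumptions the plausible ratio interval spans roughly 1.3 to 3.0, orders of magnitude wider than the ± 0.01 spread reported, which is why the homogeneity of the published ratios invites scepticism about the measurement method rather than confidence in the peaking protocols.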
The BIS-derived raw bioimpedance results from the aforementioned case study by Barakat et al. [ 40 ] showed a smaller competition-day increase in the ICW/ECW ratio from the day prior (+ 3.87%) than that reported by Nunes et al. (+ 20%) [ 95 ], likely due to the different devices employed. BIS devices possess superior predictive capabilities compared to BIA as they use a spectrum of frequencies to differentiate ICW and ECW [ 98 , 100 ], making the use of regression-derived population-specific prediction equations to estimate fluid compartments unnecessary [ 98 , 101 ]. However, limitations still exist even within BIS. Specifically, device validation in different populations is required, as inherent body geometry and composition variations exist [ 98 ]. This limitation was present in Barakat et al. [ 40 ], as the extreme body geometry and composition of the participant likely diverged from the assumptions of the BIS device’s in-built equations.
Notwithstanding this limitation, Barakat et al. [ 40 ], who also examined the effects of their peaking strategy on fluid compartment shifts, reported an increased competition-day ICW/ECW ratio from the day prior. Curiously, however, the highest reported ICW/ECW ratio was three days prior to competition, the morning after the depletion phase when MT was at its lowest and ST at its second highest. Given the relationship proposed by Escalante et al. [ 1 ], Barakat et al. [ 40 ], and Nunes et al. [ 95 ] that a high ICW/ECW ratio should coincide with the best combination of MT increases and ST decreases, and therefore best appearance, it is plausible that either the proposed relationship is incorrect or that bioelectrical impedance derived ICW and ECW may not accurately represent body water changes during peak week.
Indeed, regarding this proposed relationship, attempting to induce such fluid shifts with the restriction of water and sodium while loading potassium—as commonly practiced by physique athletes—could even degrade aesthetic performance. Dietary sodium reductions may slow small intestine glucose absorption due to its down-regulating effect on the concentration of brush border GLUTs [ 16 – 18 ], while also reducing the concentration of sodium ions required for SGLT1 cotransport of glucose [ 102 – 105 ]. Additionally, SGLT1 and GLUT5 density and activity are lowered with a CHO-free diet [ 103 ]. While these adaptations begin within four hours of CHO exposure [ 106 ], it may take several days for appreciable increases in SGLT1 expression to occur [ 107 ], potentially slowing glucose absorption when initially loading CHO following depletion and sodium restriction. Furthermore, blood pressure decreases during the final weeks before competition [ 59 ], which would likely be compounded by sodium restriction [ 108 ]. Such blood pressure reductions would be disadvantageous for competitors seeking transient muscle size and definition increases from active hyperaemia and the accumulation of metabolites following a pre-stage “pump-up” routine [ 5 , 109 , 110 ]. Thus, it may even be advisable to increase sodium consumption on competition day for certain divisions due to its acute effect on raising plasma volume and blood pressure (albeit requiring further research to confirm the efficacy of this strategy) [ 5 , 111 – 113 ].
This strategy is often justified by the misconception that ICW and ECW are equivalent to intramuscular and subcutaneous water, respectively, and that by increasing ICW via glycogenesis, water restriction will preferentially lead to higher proportional ECW decreases [ 1 ]; however, including water restriction as part of a peaking strategy may be deleterious for competitors. While intracellular fluid is indeed the major skeletal muscle fraction, muscle also comprises a non-negligible amount of extracellular fluid [ 114 , 115 ]. Skeletal muscle is approximately 70–75% fluid [ 116 , 117 ], and total muscle water content is reduced during dehydration [ 118 , 119 ], potentially affecting muscle size. Intravascular plasma is also extracellular fluid [ 114 , 120 , 121 ]; thus, blood volume reductions from water restriction may impair the delivery of glucose to myocytes and therefore the efficacy of CHO loading. While the osmotic effect of glucose induces acute water shifts within these compartments [ 122 ], water balance and the concentration of ions are tightly regulated by homeostatic mechanisms [ 123 , 124 ]. It has been proposed that the temporal lag in re-establishing homeostasis following water loading could be leveraged to increase urine output and therefore water excretion during subsequent restriction to reduce ECW, where increased intramuscular glycogen from CHO loading may preserve or increase muscle water and thus size [ 1 ]. However, Escalante et al. [ 125 ] recently observed a moderate relationship between total body water (TBW) and ECW ( r = − 0.44, p < 0.05) in physique competitors with varied approaches to water intake during peak week. This indicates that the proportion of ECW is greater when TBW is reduced, suggesting that the competitors were not able to preferentially reduce ECW through peaking strategies.
As the appearance of the participants was not subjectively evaluated, in addition to a lack of experimental evidence, the combined effect of water and electrolyte manipulation on the appearance of muscle and its time course is unknown. Furthermore, Escalante et al. [ 1 ] recommended that water and CHO manipulations be planned and practiced before peak week, or to be kept relatively constant if such practice runs are not feasible, highlighting the potential for performance decrements with such strategies.
Notably, a cross-sectional study examining the diets and metabolic profiles of male and female high-level drug-enhanced bodybuilders found that blood sodium levels were within normal ranges 24 h prior to competition [ 126 ]. This was despite the deliberate restriction of dietary sodium to reduce fluid retention, evidenced by strategies such as shifting from tap water to distilled water. As such, it seems unlikely that electrolyte and water manipulation substantially alter the concentration of sodium ions to induce the desired fluid shifts. In fact, if successful, such practices may increase the risk of life-threatening conditions such as hyperkalaemia and rhabdomyolysis, especially when combined with diuretics and anabolic steroids [ 127 , 128 ]. Based on the physiological reasoning provided and the previously discussed studies not observing competitor appearance changes [ 40 , 95 ], it is difficult to assert that such fluid shifts, and the nutritional strategies intended to induce them, occur as expected or are favourable for physique sport performance.
In summary, while observational studies document the implementation of CHO manipulation protocols by physique athletes and suggest that these techniques may increase muscle size, limited study numbers and methodological concerns confound interpretation. Furthermore, we present our arguments against certain strategies (such as water and electrolyte manipulation) which are predicated on physiological mechanisms rather than empirical evidence. Such proposed strategies may indeed improve appearance; however, to determine if that is the case requires rigorous and controlled investigations.
Experimental Designs
A Quasi-Experimental Design in Physique Athletes
Arguably the most relevant study of peak week was conducted by de Moraes et al. [ 30 ]. The researchers stratified 24 male bodybuilders into two groups, delineated by whether CHO was loaded or not before competition. Notably, MT appeared to increase following a 24-h CHO load after three days of depletion. Both groups increased daily CHO intake following depletion, with the loading group increasing to 9.0 ± 0.7 g/kg BM from 1.1 ± 0.4 g/kg BM compared to the non-loading group increasing to 5.2 ± 0.9 g/kg BM from 0.9 ± 0.6 g/kg BM. The loading group increased both elbow flexor (+ 3.1%, p < 0.05) and triceps brachii (+ 3.4%, p < 0.05) MT, whereas there were no increases within the non-loading group. The loading group also improved their physique silhouette scores on a scale developed by Castro et al. [ 129 ]. The competitors were evaluated using the silhouette scale by seven official bodybuilding judges blinded to the intervention, indicating that CHO loading may positively influence subjective measures of muscle size. However, a limitation of the silhouette scoring system employed is that any changes in the appearance of leanness may not be distinguished or quantified. Furthermore, skinfold measures were not collected at the second point of data collection, meaning the effect on ST could also not be determined. For future research, assigning a score for both muscle size and definition when subjectively evaluating the appearance of competitors may allow for further detail on the effects of peaking strategies to be uncovered.
Measures of abdominal and epigastric symptoms were also collected and compared between groups [ 30 ]. Constipation was the most prominent gastrointestinal symptom in both groups following depletion, which persisted within the non-loading group at the second point of data collection (2.00 ± 0.67 to 2.13 ± 0.81, p > 0.05). Contrastingly, incidences of constipation decreased in the loading group (1.89 ± 0.57 to 1.53 ± 0.72, p < 0.05) while diarrhoea increased (1.22 ± 0.42 to 1.93 ± 0.37, p < 0.05). This is potentially the result of drastically increasing CHO beyond the emptying rates of the stomach and gastrointestinal tract [ 16 ], where glucose transporters are seemingly downregulated following CHO restriction [ 130 ]. Interestingly, both groups’ total scores of gastrointestinal symptoms increased (loading group = 14.9 ± 0.22 to 16.93 ± 0.24, p < 0.05 vs. non-loading group = 13.88 ± 0.28 to 14.21 ± 0.31, p < 0.05). This finding may be indicative of competition stress, irrespective of CHO intake, since acute stressors can slow gastric emptying rates [ 131 ]. Thus, competition stress may contribute to the slowing of gastrointestinal glucose absorption and subsequent glycogenesis, as well as to gastrointestinal distress. The findings of de Moraes et al. [ 30 ] further highlight the utility of experimenting with different CHO loads prior to competition, as individualising the CHO loading protocol (i.e. the timing, quantity, and type of CHO) could maximise the rate of glycogenesis while minimising gastrointestinal symptoms. Such experimentation may confer some physiological and psychological benefits [ 132 – 134 ] associated with intermittent dieting or “refeeding”, while allowing for competitors to become (re)accustomed to large volumes of CHO.
An Experimental Design
In the only experimental design to date, Balon et al. [ 135 ] intended to replicate a CHO loading protocol employed by bodybuilders with a crossover design. The protocol involved a three-day isoenergetic, low-CHO diet (10% of diet) followed by an isoenergetic, high-CHO diet (80% of diet) for two days during the experimental arm, while the control arm participants consumed an isoenergetic, moderate-CHO diet (55% of diet). Ultimately, no significant muscle girth increases were reported following the two-day CHO loading regimen.
Unfortunately, this study did not replicate the peak week conditions of bodybuilders. Notably, the mean body fat percentage of the participants was 10 ± 1%, which is much higher than the values of 4.4–6.3% typical of high-level male bodybuilders in the final week of competition [ 39 , 40 , 80 , 126 ]. The participants also had not dieted with a reduced CHO intake for months prior to the study. This detail is salient as contest preparation may induce chronic glycogen depletion which could subsequently impair glycogenesis. Further, the participants consumed an isoenergetic diet during depletion, whereas CHO loading physique athletes are initially in a severe energy deficit which would cause greater glycogen depletion prior to loading [ 3 ]. The participants also altered the proportion of CHO rather than increasing their energy intake with additional CHO, which may not have maximised glycogenesis [ 12 , 15 ].
Furthermore, a high-volume resistance training protocol of 30–35 sets to or very close to failure was performed daily during depletion, which may vary from typical practices of bodybuilders (~ 50% higher than that used by natural bodybuilders [ 56 ]) who often decrease training stress during peak week [ 82 , 88 , 92 ]. Such high set volume and intensity during CHO restriction may have caused muscle damage and sarcolemmal membrane disruption, possibly impairing glycogenesis in the subsequent CHO load [ 136 – 140 ]. Indeed, it may be advisable to not train with high volumes, in close proximity to failure, as well as not performing exercises which train muscles at long lengths under heavy eccentric loads [ 136 – 143 ] during peak week to avoid excessive muscle damage.
Finally, while the authors did not report muscle girth increases, it is plausible that a visual change in the appearance of the muscle and overall aesthetic could have occurred. Therefore, further ecologically valid experimental research examining visual changes by judges of the relevant physique sport division with body composition measures is required to determine the effects of peaking strategies on bodybuilding performance. Both experimental designs are summarised in Table 4 .
Practical Applications
Based on the current evidence, making specific peaking recommendations to improve physique sport performance is difficult. Nevertheless, some practical guidance to prospective athletes and coaches wishing to adopt peaking strategies can be provided. For example, loading with 3-12 g/kg/BM of CHO may increase muscle size; however, the exact amount likely is dependent on the requirements of the individual and division of competition (i.e. male bodybuilders likely require more CHO than bikini competitors due to a greater emphasis on muscularity). Thus, it is likely advisable that competitors and coaches test different CHO loading magnitudes and strategies well in advance of competition day in comparable physiological conditions (i.e. very low levels of adiposity, typically one to two months away from competition). Visual changes and the time course for the CHO load to “take effect” and alter the competitor’s physique as well as the quantity and type of CHO consumed should be recorded to inform future peak week strategies to increase their reliability. Such practice runs also present competitors the opportunity to habituate to high acute CHO intakes and reduce gastrointestinal stress [ 12 , 16 ]. Additionally, using information from previous competitions to guide future practice is recommended. Establishing an individual response pattern could be especially valuable for female competitors whose rate of glycogenesis and glycogen storage capacity may be impacted by disruptions to the menses typically seen in contest preparation [ 74 – 80 ]. Thus, it would be prudent for coaches and competitors to experiment with differing loads during different phases of the menstrual cycle (or in its absence) prior to competition to better anticipate visual changes.
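As a purely illustrative aid (not from the source), the 3–12 g/kg/BM guideline above translates into absolute daily CHO targets as follows; the function name and the 80 kg example are our own:

```python
def cho_load_range(body_mass_kg, low_g_per_kg=3.0, high_g_per_kg=12.0):
    """Daily CHO load (grams) implied by a g/kg body-mass loading range."""
    return body_mass_kg * low_g_per_kg, body_mass_kg * high_g_per_kg

# e.g. an 80 kg competitor would target roughly 240-960 g CHO per day
low_g, high_g = cho_load_range(80.0)
```

An athlete or coach would narrow this wide range through the rehearsal runs described above, recording the intake that maximised fullness without spillover.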
During peak week, avoiding strategies that drastically alter nutritional variables from previous weeks may be sensible. These alterations, which include the substantial manipulation of CHO, water and electrolytes, and the introduction of new foods, could introduce the risk of unpredictable and deleterious effects if not executed appropriately. For example, loading with too much CHO may reduce the appearance of muscle definition. Additionally, depleting glycogen prior to loading may be unnecessary to achieve maximal glycogen supercompensation [ 10 , 11 ], and thus, competitors can avoid extremely low-CHO intakes during peak week which may incur unnecessary psychological stress and reduce training quality [ 30 , 59 , 74 , 144 ]. However, without experimental data to confirm our suppositions, it is possible that this approach could be advantageous in some cases (i.e. a competitor requiring lower body fat benefiting from low energy intake during depletion). Likewise, restricting water and sodium has the potential to reduce muscle size and vascularity and impair CHO loading, while overconsumption may lead to unwanted water retention which may obscure muscle definition and/or cause abdominal distension [ 1 ].
As physique competitors typically incur psychological distress close to competition [ 30 , 59 , 74 ] and given the proposed relationship between stress and water retention [ 1 ], stress management may be an overlooked area to improve performance. Thus, to minimise stress, establishing an individual response pattern and reducing the number of variables manipulated may benefit the competitor. Psychological distress may also be amplified by travel-related stressors; to lessen their impact on performance, competitors could travel earlier and become accustomed to the new environment and time zone (if applicable). Mindfulness techniques, which have been shown to moderately reduce stress in non-clinical populations (Hedges’ g = 0.55, p < 0.01) [ 145 ], may also be of interest to competitors; however, further research examining such techniques in the context of contest preparation and peak week is required to make concrete recommendations.
The manipulation of training variables should be considered when attempting to induce muscle glycogen supercompensation. As glycogenesis may be impaired by high degrees of muscle damage, training with high volumes, very close to failure, or performing exercises which place muscles at long lengths or under heavy eccentric loads should be avoided [ 136 – 143 ]. It is also advisable that competitors consume adequate energy predominantly from high-glycaemic index CHO with minimal fibre to maximise glycogenesis while minimising gastrointestinal distress [ 16 , 146 ]. Finally, as muscle glycogen levels remain stable for up to five days following supercompensation even with the cessation of exercise [ 49 , 50 ], ceasing resistance training and cardiovascular exercise during and after loading may help maximise and preserve intramuscular glycogen for competition.
As some divisions emphasise muscularity of certain muscle groups (i.e. upper body for physique and lower body for bikini competitors), preferential supercompensation of glycogen may be achieved in these muscle groups if they are depleted to a greater degree via resistance training. As the rate of glycogenesis is influenced by prior glycogen depletion and muscle contraction-stimulated translocation of glucose transporters [ 12 , 20 – 22 ], preferentially depleting muscle groups of interest may benefit certain competitors; however, further evidence is required to determine the effects on physique sport performance.
If feasible, it may be ideal for competitors to achieve the required level of conditioning three to four weeks prior to competition and slowly increase CHO intake. Such an approach might improve resistance training performance [ 144 ] while allowing time to adjust intakes based on physique changes (i.e. increasing CHO as much as possible without increasing ST). This approach may preclude the necessity of “last minute” or otherwise harmful, drastic nutritional changes such as dehydration or sodium restriction with potassium supplementation. Conversely, consuming a concentrated bolus of sodium immediately prior to competition in conjunction with a pump-up routine may acutely enhance appearance in relevant divisions; nevertheless, this approach is speculative (as are many assertions in this area about best practice) and requires specific study.
Abbreviations
CHO: Carbohydrate
DW: Dry weight
MT: Muscle thickness
ST: Subcutaneous tissue thickness
ICW: Intracellular water
ECW: Extracellular water
BIA: Bioelectrical impedance analysis
BIS: Bioelectrical impedance spectroscopy
SGLT1: Sodium–glucose cotransporter one
GLUT: Glucose transporter
Acknowledgements
Not applicable.
Author contributions
KH and EH conceived and designed this review. KH performed database searches and compiled the relevant information from the included studies. KH wrote the first draft of the manuscript. All authors critically revised the manuscript and approved the final version of the manuscript.
Funding
No external sources of funding were used to conduct or prepare this review.
Availability of Data and Materials
Not applicable.
Declarations
Ethics Approval and Consent to Participate
Not applicable.
Consent for Publication
Not applicable.
Competing interests
Kai Homer, Matt Cross, and Eric Helms declare that they have no conflict of interest relevant to the content of this review. Eric Helms is a writer and coach in the bodybuilding and fitness industry.
Sports Med Open. 2024 Jan 13; 10:8. License: CC BY.
Depression is a prevalent mental health disorder affecting millions worldwide 1 . It is characterized by persistent sadness, hopelessness, and loss of interest in daily activities 2 . Despite multiple treatment options, including antidepressants, psychotherapy, and brain stimulation techniques, many depressed patients fail to achieve remission or experience adverse effects of medication 3 . Therefore, there is an urgent need to develop more effective and better tolerated therapies for depression 4 . Despite advances in chemical and synthetic drugs used to treat depression, substantial side effects and high costs are frequently reported 5 – 7 . There is evidence of an association between alterations in the immune system and depression, suggesting that reducing neuroinflammation is a potential mechanism for treating depression 8 . Recent studies have suggested that neuroinflammation, or activation of immune cells in the brain, is involved in the pathophysiology of depression 9 . This has led to the investigation of anti-inflammatory drugs as potential antidepressants 10 . Notably, the chronic unpredictable mild stress (CUMS) model has been widely recognized to mimic depression-like disorders: animals are exposed to a series of unpredictable mild stressors for a prolonged period 11 . This model reproduces features of depression such as stress-induced anhedonia and apathy 12 . Therefore, the present study used CUMS to model depression-like disorders in mice 13 .
Longya lily (Lilium brownii var. viridulum), a member of the family Liliaceae, is often used as a functional component in traditional Chinese medicine and is widely distributed in the Hunan and Jiangxi provinces of China 14 . Clinical research on the combination of Longya lily and fluoxetine in the treatment of depression has been reported 15 . Current basic research generally holds that Longya lily may exert its pharmacological effects by regulating metabolism 16 , while fluoxetine treats depression by regulating GABA secretion, among other mechanisms 17 . However, the molecular mechanism by which fluoxetine combined with Longya lily exerts antidepressant effects is largely unknown. Through cell and animal experiments, this study clarifies for the first time that fluoxetine combined with Longya lily exerts a therapeutic effect on depression by regulating the inflammatory response through the COX-2/PGE2/IL-22 axis.
Du et al. presented evidence for the antidepressant activity of Longya lily, which confers a synergistic effect with albendoside that can modulate metabolic signaling 18 . Notably, a systematic review highlighted the clinical application of Longya lily in treating depression, pointing to the importance of elucidating its pharmacological mechanism 19 . As previously reported, Longya lily can modulate multiple miRNAs in depressed patients, affecting GABAergic synaptic and neurotrophic signaling, curbing neurotransmitter deficits, and attenuating inflammatory responses 20 . Fluoxetine is a selective 5-hydroxytryptamine reuptake inhibitor (SSRI) known to stimulate neurogenesis 21 . SSRIs are the most widely prescribed drugs for treating depression and anxiety disorders 22 . In addition, fluoxetine has been reported to improve depression-like behavior in CUMS rats 23 . Beyond depression, fluoxetine has other therapeutic indications such as anxiety disorders, bulimia nervosa and premature ejaculation 24 . However, some adverse effects of SSRIs have been noted, such as nausea, nervousness, insomnia, headache, sexual dysfunction and hair loss 25 . Therefore, synergistic drug combinations are needed to achieve treatment goals while preventing adverse events 26 .
Significant upregulation of cyclooxygenase-2 (COX-2) expression has previously been observed in the hippocampus of fluoxetine-resistant depressed rats 27 . Inhibition of COX-2 exerts a neuroprotective effect on the dentate gyrus, as evidenced by suppressed neuroinflammatory responses and neuronal apoptosis, which is critical to the pathophysiology of depression 28 . Importantly, prostaglandin E2 (PGE2) is a downstream factor of COX-2 29 . PGE2 is a critical inflammatory mediator involved in the pathophysiology of depression, and its concentration is increased in the brain and serum of rats subjected to CUMS 30 . Published data also suggest that PGE2 promotes interleukin 22 (IL-22) production by T cells through its receptors EP2 and EP4 31 . A recent study found a significant positive correlation between IL-22 levels and depression 32 . In this study, we investigated the antidepressant effects and underlying mechanisms of a combination therapy comprising Longya lily, a traditional herbal medicine, and fluoxetine, a commonly used antidepressant, and explored how this combination could reduce neuroinflammation and provide superior antidepressant effects with fewer adverse effects than either agent alone.
Ethics statement
The current study was performed with the approval of the Ethics Committee of Youjiang Medical University for Nationalities and performed strictly with the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health.
Literature retrieval
From database inception to December 2020, the PubMed, Embase, Web of Science, China National Knowledge Infrastructure (CNKI), and Wanfang databases were searched for relevant literature using the terms depression, fluoxetine, Lilium, baihe, Lilii Bulbus, lily bulb, integrated Chinese and western medicine, and randomized controlled study. To screen the target documents accurately, we applied the operator NOT (review, animal experiment, meeting, case report).
Literature screening and data extraction
The inclusion criteria were: (1) publicly published documents at home and abroad; (2) randomized controlled study; (3) patients with depression according to the diagnostic criteria for depression; (4) intervention measures including Longya Lilium combined with fluoxetine. The exclusion criteria were: (1) duplicated publications; (2) studies without sufficient data; (3) the included patients had taken antidepressants before. Two reviewers independently extracted information from the selected studies. Any disputes regarding data extraction were resolved by agreement among several investigators.
Meta-analysis
Meta-analysis was performed using R (v4.2.3), and the combined effect was evaluated using the odds ratio (OR), weighted mean difference (WMD) and 95% confidence interval (CI). Heterogeneity among studies was evaluated with the Cochran Q test; P h < 0.05 indicated heterogeneity. I 2 was used to quantify the magnitude of heterogeneity; it ranges from 0 to 100%, with larger values indicating more pronounced heterogeneity. p < 0.05 and I 2 > 50% indicated significant heterogeneity among the studies, in which case the random-effects model was used; otherwise, the fixed-effects model was used. For sensitivity analysis, studies were omitted one by one to evaluate the stability of the results. Stata software was used to conduct the Egger test to evaluate publication bias in the included literature and to confirm the reliability of the original analysis results.
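As an illustrative sketch only (the study's actual analysis was run in R), the fixed-effect pooled estimate, Cochran's Q and I² described above can be computed as follows; the effect sizes and variances shown are hypothetical:

```python
def pooled_fixed_effect(effects, variances):
    """Inverse-variance fixed-effect pooled estimate, Cochran's Q, and I^2 (%)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2 is the proportion of total variation due to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, q, i2

# three hypothetical log-odds ratios with their within-study variances
pooled, q, i2 = pooled_fixed_effect([0.4, 0.5, 0.45], [0.04, 0.05, 0.045])
# homogeneous toy data: I^2 = 0%, so a fixed-effect model would be retained
```

With I² > 50% (and p < 0.05 on Q), the decision rule above would switch to a random-effects model instead.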
Drug and disease target screening of network pharmacology
Previous studies have shown that saponins are the main component responsible for the antidepressant effects of Bulbus Lilii, a traditional Chinese medicinal herb 81 . Hence, this study focuses on screening target genes directly related to saponins. Active ingredients and targets of Lilium saponins were retrieved from the Traditional Chinese Medicine Systems Pharmacology Database and Analysis Platform (TCMSP) database, and the targets related to the active ingredients of saponins were screened. The UniProtKB database retrieved the official gene symbol corresponding to the target, with the species set as “Homo sapiens”. Next, fluoxetine-related target genes were retrieved from the Comparative Toxicogenomics Database (CTD) database (Interaction Count ≥ 3). At the same time, the target genes related to depression were retrieved using the CTD database (Inference score ≥ 100). A Venn diagram of target genes related to Longya Lilium, fluoxetine, and depression was plotted using the Draw Venn Diagram tool.
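The core of the Venn-diagram step is simple set intersection; a minimal sketch with hypothetical gene lists (PTGS2/COX-2 is a gene actually studied here, the others are placeholders):

```python
# hypothetical target-gene sets standing in for the TCMSP/CTD retrievals
lilium_targets = {"PTGS2", "IL6", "TNF", "AKT1"}
fluoxetine_targets = {"PTGS2", "SLC6A4", "TNF", "HTR1A"}
depression_targets = {"PTGS2", "TNF", "BDNF", "IL6"}

# the Venn-diagram core: genes shared by the herb, the drug, and the disease
shared_targets = lilium_targets & fluoxetine_targets & depression_targets
```

The genes in the three-way overlap are the candidates carried forward into the drug-component-target network.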
Drug-component-target network construction and enrichment analysis
Cytoscape 3.6.0 software was applied to construct the Longya Lilium-fluoxetine-gene regulatory network. ClueGO plug-in in Cytoscape software was used to perform function enrichment analysis to construct a gene-function-pathway regulatory relationship network. Different functions were clustered according to different colors, and a pie chart of gene function enrichment was drawn to display the main Kyoto Encyclopedia of Genes and Genomes (KEGG) and gene ontology (GO) functions.
Visualization of single-cell sequencing data analysis results
The specific cell types involved in the regulation axis of COX-2/PGE2/IL-22 in brain tissue were retrieved from the scRNASeqDB database 82 , available at https://bioinfo.uth.edu/scrnaseqdb/ . The distribution of COX-2 (PTGS2) in various types of brain tissue cells is shown in the following link: https://bioinfo.uth.edu/scrnaseqdb/index.php?r=site/geneView&id=PTGS2&set=GSE67835&csrt=9002072548059807797 . Similarly, the distribution of PGE2 (PTGER2) in various types of brain tissue cells can be found at: https://bioinfo.uth.edu/scrnaseqdb/index.php?r=site/geneView&id=PTGER2&set=GSE67835&csrt=9002072548059807797 . Lastly, the distribution of IL-22 (IL22) in different types of brain tissue cells can be viewed here: https://bioinfo.uth.edu/scrnaseqdb/index.php?r=site/geneView&id=IL22&set=GSE67835&csrt=9002072548059807797 .
Depression mouse model establishment
A total of 120 SPF C57BL/6J male mice (aged 9–12 weeks, weighing 18–21 g) were used to develop a model of depression based on the CUMS method. The stress paradigm consisted of the following stressors: swimming in ice water (5 min), food and water deprivation (24 h each), tail pinch (1 min), shaking (once/s, 15 min), reversal of day and night, and restraint (5 min each time) for a total of 24 days. One of these stressors was randomly applied each day, and each stimulus was performed 21 times so that the mice could not predict the occurrence of the stimulus. Ten mice were used as normal controls, and 110 were stimulated to establish the depression model. After 5 weeks, behavioral tests were performed, and serum samples were collected for biochemical testing.
The overall grouping was as follows: Control group (normal mice, no treatment), Model group (model mice, intravenous injection of empty vector), Fluoxetine group (model mice, oral gavage of fluoxetine at 20 mg/kg), Lilium saponins group (model mice, oral gavage of Lilium saponins at 50 mg/kg), and Lilium saponins + Fluoxetine group (model mice, oral gavage of Lilium saponins at 25 mg/kg and fluoxetine at 10 mg/kg). The gavage volume was 0.3 mL, administered once daily for 21 consecutive days. In this study, we used Lilium saponins (Shaanxi Feiste Biotechnology Co., Ltd., product code: 904) to treat the animals, without using the whole or parts of Lilium brownii. The concentration of Lilium saponins was determined based on literature reference 83 , while the concentration of fluoxetine was determined based on literature reference 84 .
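As a quick arithmetic check of the mg/kg dosing scheme above (illustrative only; the function name is ours):

```python
def gavage_dose_mg(dose_mg_per_kg, body_mass_g):
    """Absolute dose (mg) a mouse receives under a mg/kg dosing scheme."""
    return dose_mg_per_kg * body_mass_g / 1000.0

# a 20 g mouse gavaged at 20 mg/kg fluoxetine receives 0.4 mg per day
fluoxetine_mg = gavage_dose_mg(20.0, 20.0)
```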
Animal experiments were conducted with the approval of our institution’s Animal Ethics Committee and in compliance with the guidelines for animal experimentation provided by the National Institutes of Health (NIH). Specifically, mice were anesthetized by intraperitoneal injection of sodium pentobarbital (60 mg/kg) while vital signs, including respiration, heart rate, and blood pressure, were closely monitored to ensure adequate anesthesia. When obtaining mouse brain tissue, humane euthanasia procedures were strictly followed, employing an anesthetic overdose (2–4 times the typical anesthetic dosage) to ensure a rapid and humane death. Before the procedures, the euthanasia equipment was confirmed to be in good working condition. Operators were trained to recognize signs of pain, fear, or distress in the experimental animals and to avoid startling noises that could cause fear. After the procedure, operators confirmed death in accordance with humane practice. Animal remains were handled and disposed of properly to prevent any environmental impact. All individuals involved in euthanasia demonstrated sensitivity toward the value of animal life and adhered to scientific, rational, and ethical principles to ensure a painless and fear-free death, respecting the dignity and worth of the experimental animals rather than viewing them merely as research materials or tools 85 – 87 .
Plasmid construction, lentivirus transfection, and grouping
BV-2 cells (ATL03001), purchased from the National Infrastructure of Cell Line Resource (Beijing, China), were cultured in DMEM (SH30022.01B, Shanghai Suer Biotechnology Co., Ltd., Shanghai, China) supplemented with 10% FBS (SH30070.03, HyClone Laboratories, Logan, UT) and 1% penicillin–streptomycin in a 5% CO 2 incubator at 37 °C. Three siRNA sequences were designed against the mouse COX-2 and IL-22 coding sequences (CDS) and synthesized by Guangzhou RiboBio Co., Ltd. (Guangzhou, Guangdong, China). BV-2 cells were plated in a 24-well plate at a density of 1 × 10 5 cells/well and cultured with 1 mL DMEM at 37 °C with 5% CO 2 for 24 h. Lipofectamine 2000 reagent (#11668027, Invitrogen) was used to transfect 50 nM negative control (NC), COX-2 small interfering RNA (siRNA, si), or IL-22 siRNA into BV-2 cells for 48 h. The silencing efficiency of COX-2 and IL-22 in BV-2 cells was detected by Western blot.
The lentivirus packaging system was constructed with LV5-GFP (lentiviral gene overexpression vector) and pSIH1-H1-copGFP (lentiviral short hairpin RNA (shRNA) fluorescent gene-silencing vector). The sh-COX-2, sh-IL-22, overexpression (oe)-COX-2, oe-IL-22, sh-NC, and oe-NC constructs were produced by Shanghai GenePharma Co. Ltd. (Shanghai, China). BV-2 cells were co-transfected with the packaging lentivirus and target vectors. After incubation for 48 h, the cell supernatant was collected, centrifuged, and filtered, and the lentivirus titer was determined. G418 was used to select stably transfected cells for over 2 weeks, and mRNA or protein levels were verified by reverse transcription-quantitative polymerase chain reaction (RT-qPCR) or Western blot analysis.
The CUMS-induced mice were intravenously injected with lentivirus (10 μL) harboring sh-NC, sh-COX-2, sh-IL-22, Longya Lilium + fluoxetine + oe-NC, Longya Lilium + fluoxetine + oe-COX-2 and Longya Lilium + fluoxetine + oe-IL-22 with a final titer of 1 × 10 9 TU/mL. One day before CUMS, body weight, depression-like behaviors, and sucrose consumption were evaluated. On the second day, the mice underwent CUMS, and at the same time, were injected with lentiviruses and administrated with drugs via the tail vein. After 28 days, body weight, depression-like behaviors and sucrose consumption were assessed. The mice were anesthetized with sodium pentobarbital and euthanized; after that, the blood and brain tissue were collected, and the brain tissue was stored at −80 °C.
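For intuition, the total transducing units delivered by the injection described above follow directly from titer × volume (an illustrative calculation, not from the source):

```python
def lentivirus_dose_tu(titer_tu_per_ml, volume_ul):
    """Total transducing units (TU) delivered for a given injection volume."""
    return titer_tu_per_ml * (volume_ul / 1000.0)

# 10 uL of a 1e9 TU/mL preparation delivers about 1e7 TU per mouse
dose_tu = lentivirus_dose_tu(1e9, 10.0)
```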
Behavioral assessments
In this study, the body weight of mice was measured on the day before the experiment and again on the 28th day. The weight gain of each group of mice was calculated by subtracting the weight on the first day of the experiment from the weight on the 28th day.
An opaque cube measuring 80 cm × 80 cm × 40 cm was constructed for the open field test with 25 equal-sized squares marked on the floor. Mice were placed in the center square, and their activity was recorded for 3 min. Both horizontal and vertical activity were recorded, with horizontal activity being measured as the number of squares crossed and vertical activity as the number of times both forepaws left the ground. Each animal was tested only once, and to ensure the reliability of the experiment, a double-masked procedure was employed whereby different researchers monitored the activity without knowledge of the group assignment of the mice 88 .
In the forced swim test, mice were placed individually in a water tank (80 cm high, 30 cm in diameter, and at a temperature of 25 °C) and forced to swim for 6 min. After 24 h, the mice were placed in the same tank for 5 min, and their swimming and immobility times were recorded for 3 min. Immobility time was defined as floating with minimal movement to keep the head above water 36 .
In the tail suspension test, mice were suspended by their tails for 360 s with their heads facing downwards, the tail being fixed to a support 1.5–2 cm from its base. The immobility time was recorded as the time the mouse spent motionless in this state. The measurement was taken using a single-masked procedure 89 .
In the sucrose preference test, mice were individually housed in a cage containing two bottles of sucrose solution (1%, w/v) for 24 h during an adaptation stage. During the second 24-h period, one bottle of sucrose solution was replaced with tap water. In the test phase, mice were deprived of food and water for 24 h and then placed in a cage containing two bottles, one containing 100 mL of 1% sucrose solution and the other 100 mL of tap water, for 3 h. Sucrose preference was defined as the sucrose consumption divided by the total consumption (sucrose solution and water) multiplied by 100%. Eight mice were used for each group 36 .
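The preference formula stated above can be written out directly (a minimal sketch; intake volumes are hypothetical):

```python
def sucrose_preference_pct(sucrose_ml, water_ml):
    """Sucrose preference (%) = sucrose intake / total fluid intake x 100."""
    return sucrose_ml / (sucrose_ml + water_ml) * 100.0

# a mouse drinking 6 mL sucrose solution and 2 mL water scores 75%
pref = sucrose_preference_pct(6.0, 2.0)
```

A reduced preference relative to controls is read as anhedonia-like behavior in the CUMS model.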
Hematoxylin-eosin (HE) staining
Paraffin-embedded sections of hippocampal tissues were prepared and cut into 4-μm-thick sections. The sections were stained with hematoxylin (C0007, Baoman Biotechnology Co., Ltd., Shanghai, China) for 10 min at room temperature. Next, the sections were counterstained with eosin at room temperature for 5–10 min. Before observation, the samples were mounted with neutral gum under an optical microscope (XSP-36, Boshida Optical Instrument Co., Ltd., Shenzhen, China).
Immunohistochemistry
The tissue sections were incubated with 3% H 2 O 2 (84885, Sigma-Aldrich, St Louis, MO) at 37 °C for 30 min and boiled in 0.01 M citrate buffer at 95 °C for 20 min. The sections were then blocked with normal goat serum at 37 °C for 10 min and probed with primary antibody rabbit anti-mouse COX-2 (1:100, ab15191) at 4 °C overnight. The following day, the sections were re-probed with secondary antibody goat anti-rabbit immunoglobulin (IgG; 1:1000, ab6721, Abcam Inc., Cambridge, UK) at room temperature for 30 min, developed with diaminobenzidine (DAB; ab64238, Abcam), counterstained with hematoxylin and mounted. The primary antibody was substituted with PBS as NC. Five high-power fields were randomly selected from each section, with 100 cells counted in each field, and the rate of positive cells was calculated.
Enzyme-linked immunosorbent assay (ELISA)
Levels of IL-1β, IL-6, tumor necrosis factor-α (TNF-α) and interferon-gamma (IFN-γ) in peripheral blood of mice were measured using the IL-1β (ab197742, Abcam), IL-6 (ab222503, Abcam), TNF-α (ab208348, Abcam) and IFN-γ (ab100689, Abcam) ELISA kits. The optical density (OD) value was measured at 450 nm.
Immunofluorescence staining
The mouse hippocampal tissue sections were fixed with 4% paraformaldehyde, permeabilized in 0.3% Triton X-100, and blocked with 1% bovine serum albumin. After that, the sections were probed with primary antibody Iba1 (A12391, 1:100, Abclonal Technology, Inc.) overnight at 4 °C. Following PBS washing, the sections were re-probed with the corresponding fluorescent secondary antibodies (AS011 and AS007; 1:100, Abclonal). Finally, the sections were observed under a confocal laser scanning microscope (FluoView FV10i, Olympus Optical Co., Ltd, Tokyo, Japan). Five sections were randomly selected from each mouse, and three visual fields were randomly selected and photographed from each section. ImageJ software (National Institutes of Health, Bethesda, Maryland) was used for fluorescence intensity analysis.
TUNEL assay
A TUNEL kit (Boehringer Mannheim, Germany) was used for TUNEL staining as described in the instructions. The 20-μm-thick tissue cross-sections were prepared in a cryostat at −22 °C, and 10 sections were taken from each group for TUNEL staining. Under an optical microscope (Leica DM4 P, Shanghai Meijing Electronics Co., Ltd., Shanghai, China), dark particles indicated apoptotic cells. Five high-power fields were randomly selected from each section for observation and photographing, and positively stained cells were counted per 100 cells.
RNA isolation and quantitation
Total RNA was extracted with RNA Extraction Kit (D203-01, GenStar Biosolutions Co., Ltd., Beijing, China) and then reversely transcribed as per the instructions of TaqMan MicroRNA Assays Reverse Transcription Primer (4366596, Thermo Fisher Scientific Inc., Waltham, MA). RT-qPCR was conducted using SYBR ® Premix Ex Taq TM II kit (RR820A, Action-award Biological Technology Co., Ltd, Guangzhou, China) on the ABI PRISM ® 7300 system (Prism ® 7300, Shanghai Kunke Instrument Equipment Co., Ltd., Shanghai, China). Takara Biotechnology Ltd. (Dalian, China) designed and synthesized the primers, with sequences shown in Supplementary Table 3 . The fold changes were calculated using relative quantification (the 2 −ΔΔCt method), with GAPDH as a loading control.
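The 2^−ΔΔCt calculation referenced above is straightforward to spell out (a sketch with hypothetical Ct values; GAPDH is the reference gene, as in the study):

```python
def fold_change_ddct(ct_target_tr, ct_gapdh_tr, ct_target_ctl, ct_gapdh_ctl):
    """Relative expression by the 2^-ddCt method, GAPDH as loading control."""
    ddct = (ct_target_tr - ct_gapdh_tr) - (ct_target_ctl - ct_gapdh_ctl)
    return 2.0 ** (-ddct)

# hypothetical Ct values: target drops 2 cycles relative to GAPDH -> 4-fold up
fc = fold_change_ddct(24.0, 18.0, 26.0, 18.0)
```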
Western blot analysis
Total protein was extracted from tissues with tissue lysis buffer containing phenylmethylsulphonyl fluoride (PMSF) (Boster Biological Technology Co., Ltd., Wuhan, China). The total protein concentration was determined with a bicinchoninic acid (BCA) kit (20201ES76, YEASEN Biotechnology Co., Ltd., Shanghai, China). The protein was separated by SDS-PAGE on a 10% separating gel with stacking gel and transferred onto a nitrocellulose membrane. The membrane was blocked with 5% skimmed milk powder at 4 °C overnight and then incubated overnight at 4 °C with the diluted primary rabbit antibodies against COX-2 (1:500, ab62331, Abcam), PGE2 (1:200, #PA5-77694, Thermo Fisher Scientific Inc., Waltham, MA), IL-22 (1:1000, #PA5-115408, Thermo Fisher Scientific), TNF-α (1:1000, ab215188, Abcam), and IFN-γ (1:100, ab24780, Abcam). The next day, the immunocomplexes on the membrane were visualized using enhanced chemiluminescence (ECL) reagent (Pierce Biotechnology Inc., Rockford, IL) at room temperature for 1 min, and band intensities were quantified using ImageJ software, with GAPDH serving as a loading control. All original western blot images can be found in Supplementary Figs. 3 – 25 .
Cell culture and grouping
BV-2 cells were cultured with high-glucose DMEM containing 10% FBS, 100 units/mL penicillin, and 100 μg/mL streptomycin in a 5% CO 2 incubator at 37 °C, with the medium changed every 24 h. Cells were passaged after 48 h. After recovery, the growth status of BV-2 cells in the culture flask was observed under a microscope. When cells reached about 80–90% confluence, the medium was discarded, and 5 mL 0.01 M PBS was used to wash the culture flask twice to remove the residual medium. Next, 1 mL of 0.25% trypsin was added to the culture flask to cover the cells. The digestion was halted when the gap between the BV-2 cells under a microscope increased. The cells were gently pipetted until complete suspension and then passaged for another culture.
BV-2 cell suspension was seeded in a 96-well culture plate at a density of 1 × 10⁴ cells/well (100 μL) and incubated in a cell incubator for 12 h to allow the cells to adhere. Cells treated with PBS served as the control. After 1 h, the cells were exposed to 1 mg/L lipopolysaccharide (LPS) for 24 h and further treated with Longya Lilium (10 μM) + fluoxetine (15 μM), or transduced with lentivirus containing si-NC, si-COX-2, si-COX-2 + oe-PGE2, oe-NC, oe-PGE2, or oe-PGE2 + si-IL-22.
MTT assay
Cells were seeded in a 96-well plate at a density of 5 × 10⁴ cells/well, with 6 parallel wells per group. After treatment, 20 μL of MTT solution (Sigma-Aldrich) was added to the cells and incubated at 37 °C for 4 h. Each well was then supplemented with 150 μL of dimethyl sulfoxide (DMSO; Sigma-Aldrich) to dissolve the formazan crystals. Afterward, the optical density (OD) of each well was measured at 450 nm using a microplate reader.
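MTT optical densities are typically converted to percent viability against the control wells. A small sketch of the standard calculation is below; the formula (blank-corrected ratio to control) and the OD values are assumptions for illustration, not data from the study.

```python
# Hedged sketch: percent viability from MTT optical densities (OD).
# Assumed formula: viability % = (OD_sample - OD_blank) / (OD_control - OD_blank) * 100.
# OD values are illustrative, not measurements from the study.

def viability_percent(od_sample, od_control, od_blank=0.05):
    """Blank-corrected viability of a sample well relative to control wells."""
    return (od_sample - od_blank) / (od_control - od_blank) * 100.0

print(round(viability_percent(0.65, 0.85), 1))  # 75.0
```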
Flow cytometry
Cell apoptosis was assessed by flow cytometry 48 h after transfection. Following the instructions of the Annexin-V-FITC Cell Apoptosis Detection Kit (CA1020, Beijing Solarbio Science & Technology Co., Ltd., Beijing, China), Annexin-V-FITC, PI, and HEPES buffer were mixed into an Annexin-V-FITC/PI dye solution at a ratio of 1:2:50. A total of 1 × 10⁶ cells were resuspended per 100 μL of dye solution.
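The 1:2:50 mixing ratio stated above (Annexin-V-FITC : PI : HEPES buffer) can be converted into component volumes for any batch size. A small sketch, with an arbitrary example batch volume:

```python
# Hedged sketch: component volumes for the Annexin-V-FITC/PI dye solution
# mixed at the 1:2:50 ratio (Annexin-V-FITC : PI : HEPES buffer) stated in
# the protocol. The batch size is an arbitrary example, not from the paper.

def dye_mix(total_ul, ratio=(1, 2, 50)):
    """Split a total volume (µL) into components according to a part ratio."""
    parts = sum(ratio)
    return tuple(total_ul * r / parts for r in ratio)

annexin, pi, hepes = dye_mix(530.0)  # 530 µL batch
print(annexin, pi, hepes)  # 10.0 20.0 500.0
```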
Statistical analysis
Statistical analysis was performed using SPSS 21.0 (IBM Corp., Armonk, NY). Measurement data are described as mean ± standard deviation. Data between two groups were compared using an unpaired t-test. Data among multiple groups were assessed by one-way analysis of variance (ANOVA) or two-way ANOVA, followed by Tukey's post hoc tests with corrections for multiple comparisons. A value of p < 0.05 was considered statistically significant.
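The paper performs these comparisons in SPSS; the same two tests (unpaired t-test for two groups, one-way ANOVA for several groups) can be sketched with SciPy for readers reproducing the analysis. The measurement arrays below are synthetic placeholders, not study data.

```python
# Hedged sketch of the comparisons described above, using SciPy rather than
# SPSS: an unpaired t-test for two groups and one-way ANOVA for three groups.
# The arrays are synthetic placeholders, not data from the study.
import numpy as np
from scipy import stats

group_a = np.array([1.1, 1.3, 0.9, 1.2, 1.0])
group_b = np.array([2.4, 2.1, 2.6, 2.2, 2.5])
group_c = np.array([3.8, 4.1, 3.9, 4.0, 4.2])

t_stat, p_two = stats.ttest_ind(group_a, group_b)            # unpaired t-test
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)  # one-way ANOVA

# With clearly separated groups, both tests are significant at p < 0.05
print(p_two < 0.05, p_anova < 0.05)  # True True
```

A Tukey post hoc step would then compare group pairs after a significant ANOVA, as the text describes.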
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article. | Results
Longya Lilium combined with fluoxetine has a more significant antidepressant effect and fewer adverse reactions
A total of 462 related documents were retrieved. After screening, 8 documents met the criteria and were finally included. All of them were randomized controlled studies. A total of 736 cases of depression were included, consisting of 365 cases in the fluoxetine group and 371 cases in the Longya Lilium + fluoxetine group. A schematic diagram of the literature screening is shown in Supplementary Fig. 1 , and the baseline characteristics of the included studies are presented in Supplementary Table 1 .
Heterogeneity testing showed that, for the Hamilton Depression Scale (HAMD) score, there was heterogeneity among the studies (I² = 91% and P h < 0.01), so the random-effects model was used. For the total effective rate, there was no heterogeneity among studies (I² = 0% and P h = 0.77), so the fixed-effects model was adopted. For the incidence of adverse reactions, there was heterogeneity among studies (I² = 65.0% and P h < 0.01), and the random-effects model was applied. The main results of the meta-analysis showed that, compared with the fluoxetine group, the HAMD score in the fluoxetine + Longya Lilium group decreased more markedly (SMD = 1.30, 95% CI = 0.25–2.36), the total effective rate was higher (OR = 5.18, 95% CI = 3.25–8.24), and adverse reactions were fewer (OR = 0.34, 95% CI = 0.13–0.88) (Fig. 1A–C ).
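The I² values that drive the fixed- vs. random-effects choice above come from Cochran's Q. A minimal sketch of that calculation follows; the effect sizes and variances are synthetic placeholders, not the included studies' data.

```python
# Hedged sketch: Cochran's Q and the I^2 heterogeneity statistic used to
# choose between fixed- and random-effects pooling. Effect sizes and
# variances below are synthetic placeholders, not the study's data.
import numpy as np

def i_squared(effects, variances):
    """I^2 (%) = max(0, (Q - df) / Q) * 100, with inverse-variance weights."""
    w = 1.0 / np.asarray(variances, dtype=float)
    e = np.asarray(effects, dtype=float)
    pooled = np.sum(w * e) / np.sum(w)   # fixed-effect pooled estimate
    q = np.sum(w * (e - pooled) ** 2)    # Cochran's Q
    df = len(e) - 1
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# Homogeneous studies -> I^2 near 0; widely scattered studies -> I^2 near 100
print(round(i_squared([0.50, 0.52, 0.49], [0.01, 0.01, 0.01]), 1))  # 0.0
print(round(i_squared([0.10, 1.20, 2.50], [0.01, 0.01, 0.01]), 1))  # 99.3
```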
Meta-regression indicated that the p values for the three indicators, HAMD score ( p = 0.834), total effective rate ( p = 0.747), and adverse reactions ( p = 0.364), were all above 0.05, suggesting that publication period was not related to the heterogeneity between studies. Sensitivity analysis further showed that the SMD or OR values did not change substantially, demonstrating that the meta-analysis results were reliable (Fig. 1D–F ). In addition, the funnel plots and Egger's test for HAMD score, total effective rate, and adverse reactions showed studies scattered symmetrically within the funnel, indicating a low risk of publication bias and strengthening the credibility of the findings (Fig. 1G–I ).
In summary, compared with treatment with fluoxetine alone, the antidepressant effect of Longya Lilium combined with fluoxetine may be more significant, accompanied by fewer adverse reactions.
Longya Lilium, combined with fluoxetine, exerts an antidepressant therapeutic effect by downregulating the expression of COX-2
Next, the key targets of Longya Lilium combined with fluoxetine in the occurrence of depression were identified through network pharmacology databases. Retrieval of the CTD and TCMSP databases yielded 93 fluoxetine-related target genes, 447 depression-related target genes, and 33 saponin-related targets, respectively. Venn diagram analysis of these predicted targets showed NR3C2, SLC6A4, PTGS2 (COX-2), BCL2, KCNH2, and CASP3 at the intersection (Fig. 2A , Supplementary Table 2 ).
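The Venn analysis above is a three-way set intersection. A sketch of that step follows; only the six reported intersection genes are taken from the text, while the surrounding gene lists are illustrative placeholders (the full database outputs of 93, 447, and 33 targets are not reproduced here).

```python
# Hedged sketch: Venn-style intersection of predicted target sets.
# Only the six reported intersection genes come from the text; the extra
# genes in each set are illustrative placeholders, not database output.

fluoxetine_targets = {"NR3C2", "SLC6A4", "PTGS2", "BCL2", "KCNH2", "CASP3", "HTR2A"}
depression_targets = {"NR3C2", "SLC6A4", "PTGS2", "BCL2", "KCNH2", "CASP3", "BDNF", "TP53"}
saponin_targets = {"NR3C2", "SLC6A4", "PTGS2", "BCL2", "KCNH2", "CASP3", "ESR1"}

# Genes predicted as targets in all three sets
shared = fluoxetine_targets & depression_targets & saponin_targets
print(sorted(shared))
# ['BCL2', 'CASP3', 'KCNH2', 'NR3C2', 'PTGS2', 'SLC6A4']
```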
Figure 2B illustrates the Longya Lilium-fluoxetine-target network constructed with Cytoscape software. Further functional enrichment analysis using the ClueGO plug-in suggested that the 6 genes were mainly enriched in the intrinsic apoptotic signaling pathway in response to osmotic stress (GO:0008627), negative regulation of synaptic transmission, dopaminergic (GO:0032227), and small cell lung cancer (KEGG:05222); COX-2 was at the core of the network and was also enriched in the above functions (Fig. 2C, D ). Meanwhile, published literature has shown that the mRNA expression of COX-2 in the peripheral blood of patients with depression is significantly higher than that of healthy individuals 33 . Both Longya Lilium and fluoxetine can inhibit the expression of COX-2 34 , 35 . Therefore, we selected COX-2 as the key candidate gene.
These findings suggested that Longya Lilium combined with fluoxetine regulated the expression of COX-2 to exert the antidepressant therapeutic effect.
Longya Lilium combined with fluoxetine significantly improves depression-like behaviors in mice
To elucidate the antidepressant effect of Longya Lilium combined with fluoxetine, we used the CUMS method to construct a mouse model of depression and carried out body-weight measurements, the open field test (OFT), forced swimming test (FST), tail suspension test (TST), and sucrose preference test. The results showed that on the 28th day of the experiment, the CUMS mice exhibited a decline in body weight, total distance of motion, and sucrose preference, yet an increase in the immobility duration of swimming and tail suspension (Fig. 3A–E ). Treatment with fluoxetine, Longya Lilium, or Longya Lilium + fluoxetine significantly improved the depression in modeled mice. Relative to fluoxetine or Longya Lilium treatment alone, combined treatment with Longya Lilium and fluoxetine produced a more noticeable improvement (Fig. 3A–E ). These results showed that the antidepressant effect of the combination of Longya Lilium and fluoxetine was superior to that of Longya Lilium or fluoxetine alone.
Longya Lilium combined with fluoxetine alleviates depression and reduces neuroinflammatory response by inhibiting the expression of COX-2
We then aimed to determine the mechanism by which Longya Lilium combined with fluoxetine reduces depression and the neuroinflammatory response. The results of the TST and sucrose preference test showed an increase in the immobility duration of tail suspension and a decrease in sucrose preference in CUMS mice, while the opposite result was noted in the presence of sh-COX-2 or Longya Lilium + fluoxetine + oe-NC. In addition, treatment with Longya Lilium + fluoxetine + oe-COX-2 led to increased immobility duration of tail suspension and decreased sucrose preference (Fig. 4A, B ).
The immunohistochemistry results demonstrated that the positive expression of COX-2 was increased in the hippocampal CA1 region of CUMS mice and mainly present in the pyramidal cells. Treatment with sh-COX-2 or Longya Lilium + fluoxetine + oe-NC reduced the COX-2 positive expression, which was negated in response to Longya Lilium + fluoxetine + oe-COX-2. In the CA3 and DG regions, there was no difference in the positive expression of COX-2 upon each treatment (Fig. 4C ).
In addition, HE staining data exhibited that control mice had more cells in the hippocampus, with compact cell hierarchy, standard size, complete structure, few amoebocytes, and dark stained nuclei. Conversely, the CUMS mice showed incomplete hippocampal cell morphology, nucleus pyknosis, enlarged intercellular space, and disordered arrangement. In the presence of sh-COX-2 or Longya Lilium + fluoxetine + oe-NC, nerve cell morphology was relatively complete, the staining and the layers between cells were clear, and amoebocytes were reduced. In contrast, further COX-2 overexpression resulted in more messy and loose cell arrangement, incomplete cell morphology, more amoebocytes and light staining (Fig. 4D ).
In addition, Western blot analysis showed that expression of the inflammatory factors IL-1β, IL-6, TNF-α, and IFN-γ in the hippocampal CA1 region of CUMS mice was increased relative to that of control mice. Following treatment with sh-COX-2 or Longya Lilium + fluoxetine + oe-NC, the expression of these factors was reduced, while it was elevated again after further COX-2 overexpression (Fig. 4E ). Moreover, ELISA data also revealed higher levels of the inflammatory factors IL-1β, IL-6, TNF-α, and IFN-γ in the peripheral blood of CUMS mice than in control mice. These levels were decreased in CUMS mice treated with sh-COX-2 or Longya Lilium + fluoxetine + oe-NC relative to untreated CUMS mice, and increased following additional overexpression of COX-2 (Fig. 4F ).
In summary, overexpression of COX-2 may partially reverse the antidepressant effect of Longya Lilium combined with fluoxetine through its pro-inflammatory effect.
COX-2 activates microglial cells and promotes inflammation through the PGE2/IL-22 axis
Neuroinflammatory response is a significant risk factor in the pathophysiology of depression. Research has shown that COX-2 promotes the production of pro-inflammatory prostaglandin E2 (PGE2), and enhanced PGE2 pathway activity effectively induces the core symptoms of depression 36 . Previous studies have indicated that PGE2 is an essential downstream factor of COX-2 37 and plays a crucial role in the neuroprotective effects of SY5Y cells 38 . Therefore, it is hypothesized that Longya Lilium and fluoxetine may affect depression through the COX-2/PGE2 pathway. In addition, IL-22 is an essential inflammatory factor regulated by PGE2 in immune cells 39 , and PGE2 promotes IL-22 production in T cells, thereby contributing to the development of allergic contact dermatitis 40 . Moreover, levels of IL-22 are positively associated with depression, and inhibiting IL-22 expression improves depressive behavior in mouse models of colorectal cancer comorbid with depression 41 , 42 . Furthermore, analysis of the scRNASeqDB database reveals that COX-2 is mainly distributed in cortex microglia and hippocampus microglia, while PGE2 and IL-22 are predominantly found in endothelial cells and oligodendrocyte progenitor cells (OPCs), with microglia also exhibiting specific distribution patterns (Supplementary Fig. 2 ). Therefore, it is speculated that COX-2 may participate in the inflammatory response and contribute to the progression of depression through the PGE2/IL-22 axis, in particular by activating microglia.
To further explore whether COX-2-activated microglia-mediated inflammatory response is involved in depression through the PGE2/IL-22 axis, LPS-induced BV-2 cell activation was used to simulate the in vitro neuroinflammatory response model. Western blot analysis data showed that protein expression of COX-2, PGE2, IL-22, TNF-α, and IFN-γ in the LPS cells was higher than in the control cells. In comparison to LPS-induced BV-2 cells, protein expression of COX-2, PGE2, IL-22, TNF-α, and IFN-γ was reduced in LPS-induced BV-2 cells treated with Longya Lilium and fluoxetine (Fig. 5A ).
Moreover, ELISA results showed that silencing of COX-2 reduced the expression of COX-2, PGE2, IL-22, TNF-α, and IFN-γ in LPS-induced BV-2 cells. Compared with LPS-induced BV-2 cells transduced with si-COX-2 alone, the expression of COX-2, PGE2, IL-22, TNF-α, and IFN-γ was elevated in cells transduced with si-COX-2 + oe-PGE2. Overexpression of PGE2 elevated the expression of COX-2, PGE2, IL-22, TNF-α, and IFN-γ in LPS-induced BV-2 cells, but this expression was reduced following additional silencing of IL-22 relative to overexpression of PGE2 alone (Fig. 5B ).
The results of MTT and flow cytometry suggested that the viability of the si-COX-2-treated cells was enhanced, but apoptosis was reduced, which was negated by dual treatment with si-COX-2 and oe-PGE2. In addition, cell viability was attenuated, and apoptosis was enhanced in the presence of overexpression of PGE2, the effect of which was abolished by further silencing of IL-22 (Fig. 5C, D ).
The above results suggested that COX-2 activated microglial cells to promote inflammation by activating the PGE2/IL-22 axis.
Longya Lilium combined with fluoxetine inhibits neuroinflammatory response in mice with depression by suppressing the COX-2/PGE2/IL-22 axis
Finally, we sought to identify whether Longya Lilium combined with fluoxetine can improve the neuroinflammatory response in mice with depression by inhibiting the COX-2/PGE2/IL-22 axis. The results of the TST and sucrose preference test demonstrated an increase in the immobility duration of tail suspension of CUMS mice and a decline in the sucrose preference. Conversely, silencing of IL-22 caused opposite results. In the presence of Longya Lilium + fluoxetine + oe-IL-22, the immobility duration of tail suspension was prolonged whereas the sucrose preference was weakened (Fig. 6A, B ).
ELISA data suggested an enhancement in the levels of IL-1β, IL-6, TNF-α, and IFN-γ in the peripheral blood of CUMS mice compared with control mice, while these levels were reduced in CUMS mice treated with sh-IL-22 relative to untreated CUMS mice. In addition, treatment with Longya Lilium + fluoxetine + oe-IL-22 elevated the levels of IL-1β, IL-6, TNF-α, and IFN-γ in the peripheral blood of CUMS mice relative to CUMS mice treated with Longya Lilium + fluoxetine + oe-NC (Fig. 6C ).
RT-qPCR and Western blot analysis indicated that expression of COX-2, PGE2, and IL-22 in the hippocampal CA1 region of CUMS mice was higher than that of control mice. Compared with CUMS mice, treatment with sh-IL-22 reduced IL-22 expression in the hippocampal CA1 region, whereas IL-22 expression increased markedly in CUMS mice treated with Longya Lilium + fluoxetine + oe-IL-22 relative to those treated with Longya Lilium + fluoxetine + oe-NC (Fig. 6D, E ).
Moreover, immunofluorescence staining results showed that the number of Iba1 positive cells was increased in the hippocampal CA1 region of CUMS mice, while it was decreased in the absence of IL-22. Compared with Longya Lilium + fluoxetine + oe-NC, more Iba1-positive cells were noted in the presence of Longya Lilium + fluoxetine + oe-IL-22 (Fig. 6F ).
TUNEL staining revealed an increase in the number of apoptotic cells in the hippocampal CA1 region of CUMS mice, whereas silencing of IL-22 reduced the number of apoptotic cells. Relative to treatment with Longya Lilium + fluoxetine + oe-NC, the number of apoptotic cells was higher in response to treatment with Longya Lilium + fluoxetine + oe-IL-22 (Fig. 6G ).
In summary, silencing IL-22 could inhibit the inflammatory response and reduce depression-like behaviors in mice. At the same time, overexpression of IL-22 may partially abrogate the antidepressant effect of Longya Lilium combined with fluoxetine through its pro-inflammatory effect. | Discussion
In this study, the literature screening process was employed to define the research question and scope, ensuring the inclusion of high-quality studies to guarantee accurate analysis results. In addition, it aimed to enhance the homogeneity of the research, enabling a more precise evaluation of result consistency and credibility while reducing bias risk and improving the trustworthiness of evidence, thereby making the research findings more practical.
It is essential to investigate the molecular mechanisms underlying the antidepressant effects of combination therapy with Longya Lilium and fluoxetine. The present study demonstrates that combination therapy has superior therapeutic effects on depression-like behavior and neuroinflammation compared to either agent alone. However, the specific biological pathways involved are not yet fully understood 43 . Understanding the molecular mechanisms responsible for the effectiveness of this combination could provide insights to optimize this therapy 44 . Another reason is the urgent need to find alternative therapies to overcome the limitations of the current treatment for depression 45 . Molecular studies on the potential of Longya Lilium as a therapeutic option could open new areas of development for effective treatment modalities 46 . Finally, the potential application of this study goes beyond the treatment of depression, as it could provide essential insights into the underlying molecular mechanisms of natural compounds that improve mental health 47 . Thus, this investigation may be critical for developing new therapeutic options to improve the quality of life of depressed patients and advance mental health research.
Combined Chinese and Western medicine therapies have been widely used in human diseases due to their superior efficacy 48 . Our meta-analysis showed that Longya Lilium combined with fluoxetine had more pronounced antidepressant effects and fewer adverse effects 49 . Oral administration of Ziziphi spinosae lily powder suspension exhibited a specific antidepressant-like effect, reflected in a reduction in tail suspension and swimming immobility time 50 . The results of a previous study showed that gentianoside, in combination with bailey saponin, had better antidepressant activity compared to bailey saponin or gentianoside alone in a rat CUMS model 51 . Fluoxetine, the first selective 5-hydroxytryptamine reuptake inhibitor, has been approved for the clinical treatment of depression because it improves depression-like behavior and monoamine neurotransmitter levels 52 . In addition, fluoxetine can effectively alleviate CUMS-induced depression-like behavior in mice by modulating the expression of lncRNAs in the hippocampus 53 . There is growing evidence that fluoxetine combination therapy has better antidepressant effects than treatment alone; for example, the standardized commercial ginseng extract G115® significantly enhanced the antidepressant-like effects of fluoxetine in the FST 54 , 55 . Combined treatment with fluoxetine and 7,8-dihydroxyflavone significantly improved sucrose preference and depression-like behavior in the FST, accompanied by a significant increase in autophagy, neuronal nuclei, and Iba1 expression 56 .
In the current study, subsequent network pharmacology findings suggest that the antidepressant therapeutic effect of Longya Lilium combined with fluoxetine is associated with inhibition of COX-2 expression 57 . Consistently, pharmacological inhibition of COX-2 showed promise in preventing increases in anxiety-like behaviors induced by acute stress 58 . COX-2 is associated with the inflammatory response, and its decreased levels indicate a marked suppression of the inflammatory response 59 . Neuroinflammation is a significant risk factor for the pathophysiology of depression 60 . Reduced COX-2 expression in the hippocampus contributes to the amelioration of depression-like behaviors and hippocampal neuroinflammation induced by the stress of chronic social defeat 61 . Together, these data reveal the potential of COX-2 inhibition as a target for the treatment of depression because of its inhibitory effect on neuroinflammation 62 . The combination of Longya Lilium and fluoxetine represents a promising alternative to traditional monotherapy for depression 63 . The current investigation highlights the potential advantages of this therapeutic combination over either agent alone because of its ability to significantly reduce depression-like behavior and neuroinflammation in the preclinical setting 64 . This study also provides valuable insight into the underlying molecular mechanisms involved in this combination therapy 65 .
Previous research has demonstrated the significant role of PGE2 as a downstream factor of COX-2 37 . Moreover, it has been shown to play a crucial role in the neuroprotective effect observed in SY5Y cells 38 . Hence, we hypothesize that Longya Lilium and fluoxetine may exert their effects through the COX-2/PGE2 pathway in depression. In addition, IL-22 is a pivotal inflammatory factor regulated by PGE2 in immune cells 39 . Therefore, we selected the PGE2/IL-22 axis as the downstream signaling pathway for this study. Specifically, this therapy appears to reduce COX-2 expression and inactivate the PGE2/IL-22 pathway, leading to reduced microglial activation, inflammation, and neuronal damage 66 . The importance of this combination therapy lies in its ability to provide more effective and tolerable treatment options for depressed patients and improve their quality of life 67 . In addition, the study's focus on natural compounds, such as Longya Lilium, highlights the potential for alternative therapies derived from traditional medicinal sources 68 . The potential applications of such integrative therapies are numerous and reflect the growing need for novel and effective treatments in the mental health field 69 . Ultimately, this investigation represents an important step toward developing new treatment strategies for depression 70 .
Moreover, further mechanistic investigations in the present study suggest that COX-2 activates microglia, thereby inducing neuroinflammation through activation of the PGE2/IL-22 axis 66 . Indeed, COX-2 produces the pro-inflammatory factor PGE2, and the subsequent glial cell activation leads to the core symptoms of depression in a rat model 71 . The synthesis of PGE2 is induced in the occurrence of neuroinflammation-associated depression 72 . Also, PGE2 mRNA expression has been elevated in the CUMS-induced rat hippocampus and frontal cortex, and its inhibition can help ameliorate CUMS-induced depression-like behavior 73 . A previous study showed that PGE2 acts directly on type 3 innate lymphocytes and promotes their production of IL-22 74 . Recently, paroxetine in combination with chemotherapeutic agents was reported to alleviate depression-like behavior by downregulating IL-22 expression 75 .
Interestingly, both Longya Lilium and fluoxetine inhibited COX-2 expression 76 . Therefore, it can be concluded that the combination of Longya Lilium and fluoxetine significantly inhibited the neuroinflammatory response and ameliorated the resulting depression by inhibiting the COX-2/PGE2/IL-22 axis 77 . The COX-2/PGE2/IL-22 axis is an essential inflammatory pathway implicated in the pathogenesis of several neuropsychiatric disorders, including depression 78 . The current investigation provides essential insights into the role of this pathway in the antidepressant effects of Longya Lilium and fluoxetine combination therapy 79 . Studies have shown that this combination reduces COX-2 expression, thereby reducing microglial activation, inflammation, and neuronal apoptosis through inactivation of the PGE2/IL-22 pathway 78 . This is important because the COX-2/PGE2/IL-22 axis is known to contribute to inflammation-associated neuronal damage and to promote the production of other pro-inflammatory mediators involved in the pathophysiology of depression. Therefore, understanding the molecular mechanisms underlying the activity of this inflammatory pathway in depression is crucial for the development of more effective treatments to reduce neuroinflammation and alleviate psychiatric symptoms 80 . The results of this study represent an essential contribution to this emerging field of research and may be vital in paving the way for novel therapeutic strategies targeting the COX-2/PGE2/IL-22 axis in depression and other neuropsychiatric disorders.
Overall, the present study confirmed the antidepressant effect of the combination of Longya Lilium and fluoxetine. This effect may be related to the inhibition of the COX-2/PGE2/IL-22 axis (Fig. 7 ). The results of this study suggest that the combination of Longya Lilium and fluoxetine may be a potential alternative antidepressant treatment with fewer side effects than conventional monotherapy. Further studies are needed to elucidate the optimal dose and duration of this combination and to examine its safety, tolerability, and efficacy in humans. Clinical trials could use a combination of behavioral and neuroimaging measures to better understand the effects of Longya Lilium and fluoxetine on the brain and on the clinical symptoms of depression. Although the current study provides some insight into the underlying mechanisms of this combination therapy, the involvement of multiple pathways and components of the immune system warrants further investigation. In addition, studies evaluating the effects of Longya Lilium and fluoxetine in different populations, such as those with treatment-resistant or severe forms of depression, are needed to determine the potential efficacy of this combination in more challenging situations. Overall, these findings provide promising evidence for developing new treatment strategies to address the ongoing challenges in managing depression.
Although this study has explored in-depth the mechanisms of the combination of Longya Lily and Fluoxetine in the treatment of depression from multiple perspectives, there are still some limitations. Firstly, our study primarily focuses on the role of the COX-2/PGE2/IL-22 axis in neuroinflammatory response in depression, while investigating other possible inflammatory regulatory pathways is not comprehensive enough. Secondly, this study mainly observes the activation state of microglial cells and their associated inflammatory response, without delving into the role of astrocytes. Astrocytes are essential supportive cells in the central nervous system, and they may play a crucial role in the occurrence and development of neuroinflammation and depression. The lack of detailed investigation on astrocytes may result in an incomplete understanding of the overall neuroinflammatory regulatory mechanisms. In the future, we will further explore the role of Longya Lily and Fluoxetine in other neuroinflammatory regulatory pathways, especially in investigating the role of astrocytes in this process. Simultaneously, in-depth analysis of various biomarkers associated with depression and the effects of drugs on these biomarkers will provide a theoretical basis for individualized treatment and better outcomes for patients with depression. | Traditional Chinese medicine is one of the most commonly used complementary and alternative medicine therapies for depression. Integrated Chinese-western therapies have been extensively applied in numerous diseases due to their superior efficiency in individual treatment. We used the meta-analysis, network pharmacology, and bioinformatics studies to identify the putative role of Longya Lilium combined with Fluoxetine in depression. Depression-like behaviors were mimicked in mice after exposure to the chronic unpredictable mild stress (CUMS). 
The underlying potential mechanism of this combination therapy was further explored based on in vitro and in vivo experiments to analyze the expression of COX-2, PGE2, and IL-22, activation of microglial cells, and neuron viability and apoptosis in the hippocampus. The antidepressant effect was noted for the combination of Longya Lilium with Fluoxetine in mice compared to a single treatment. COX-2 was mainly expressed in hippocampal CA1 areas. Longya Lilium combined with Fluoxetine reduced the expression of COX-2 and thus alleviated depression-like behavior and neuroinflammation in mice. A decrease of COX-2 curtailed BV-2 microglial cell activation, inflammation, and neuron apoptosis by blunting the PGE2/IL-22 axis. Therefore, a combination of Longya Lilium with Fluoxetine inactivates the COX-2/PGE2/IL-22 axis, consequently relieving the neuroinflammatory response and the resultant depression.
Supplementary information
The online version contains supplementary material available at 10.1038/s41540-024-00329-5.
Acknowledgements
This study was supported by 2022 Baise City Scientific Research and Technology Development Plan (Bai Ke 20224141) and 2020 Guangxi University Young and Middle-aged Teachers' Basic Research Ability Improvement Project (2020KY13031).
Author contributions
H.N.M., H.H.H., and C.Y.L. wrote the paper and conceived and designed the experiments; S.S.L. and J.F.G. analyzed the data; C.R.L. and Y.W.L. collected and provided the sample for this study. All authors have read and approved the final submitted manuscript.
Data availability
The data supporting this study’s findings are available on request from the corresponding author.
Competing interests
The authors declare no competing interests.
Ethics statement
The current study was performed with the approval of the Ethics Committee of Youjiang Medical University for Nationalities and performed in strict accordance with the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health.

License: CC BY. Citation: NPJ Syst Biol Appl. 2024 Jan 13; 10:5
PMC10787740 (PMID: 38218971)

Introduction
Uranium may infiltrate the environment via uranium mining, production, and usage, posing risks to human health and the natural environment related to its toxicity and radioactivity 1 . To eliminate uranium from an aqueous solution, a multitude of treatment strategies, including physical, chemical, and biological procedures, have been used 2 . According to the principle of adsorption, nucleophilic material may chemically adsorb a variety of radioactive elements, and the process is enhanced by the material's porous structure and large specific surface area 3 .
Due to adsorption's low cost, high effectiveness, and the abundance of adsorbents, it is often employed to remove radionuclides from wastewater 4 , 5 .
The surface and edges of GO are covered with a broad range of functional groups, including hydroxyl, epoxy, carboxyl, and carbonyl groups 6 – 8 . Because of these oxygen-containing functional groups, GO has characteristics including excellent dispersion, hydrophilicity, and compatibility, which make it an appropriate support material to combine with other composites or chemically active groups 9 , 10 . The functional groups on GO play an important part in uranium recovery and removal; inserting organic groups on GO increases the number of binding sites, and cycloaddition reactions can activate dormant sites on GO 11 . The uranium adsorption capabilities of GO-based nanomaterials are highly dependent on the experimental conditions and the functional groups on GO.
Graphene nanoribbons (GONRs) are nanomaterials composed of rings of six carbon atoms, with a theoretical surface area of 2,630 m²/g and good transport features 12 . GONRs are a viable contender for uranium separation due to their hydrophobicity and large surface area, and they may be used to separate radioactive elements by loading distinct functional groups 13 , 14 .
Brown seaweeds contain the linear polysaccharide alginate, also known as alginic acid. Alginates containing monovalent ions (alkali metals and ammonium) are soluble 15 , which restricts their use in the removal of radionuclides and heavy metals from aqueous solutions. Insoluble hydrogels may be created by ion-exchanging soluble alginate with multivalent metal ions 16 . For example, calcium alginate beads have been used as an adsorbent for uranium recovery and for the removal of radionuclides and heavy metal ions from aqueous solutions 17 . In this research, the adsorption capacity and hydrophobicity were increased using graphene oxide, GONRs, and sodium alginate. The combination of graphene oxide (GO), graphene oxide nanoribbons (GONRs), and sodium alginate (SA) in this research offers several advantages for the adsorption of uranium from wastewater. Firstly, GO has a large surface area and oxygen-containing functional groups, providing numerous active sites for adsorption. GONRs, with their unique structure and larger surface area compared to GO, further enhance the adsorption capacity. Sodium alginate acts as a scaffold, forming a three-dimensional network structure that increases the available surface area for adsorption. By combining these materials, the nanocomposite aerogels exhibit a higher adsorption capacity than the individual components alone. Secondly, the combination enables the creation of a material with balanced hydrophobic-hydrophilic properties. GO is hydrophilic, while GONRs and modified SA contribute hydrophobic characteristics. This balanced nature is important for selectively adsorbing hydrophobic uranium species while remaining compatible with the aqueous environment. Lastly, the combination ensures the structural stability and porosity of the aerogels. GO and GONRs provide stability due to their two-dimensional and nanoribbon structures, respectively. Sodium alginate acts as a binder, forming a three-dimensional network that enhances mechanical strength.
The porous structure of the aerogels allows for a high surface area and efficient interaction between uranium ions and active sites. The goal of this work was to synthesize a GO/GONRs/SA aerogel via hydrothermal and lyophilization treatment to remove uranium from simulated wastewater. The form and surface properties of the materials were investigated using a variety of techniques, and the ability of the aerogel to adsorb uranium from aqueous solution was evaluated.

Materials and methods
Graphite powder (99% purity, chemical reagent, China) was utilized as a precursor to create graphene oxide (GO). Multi-walled carbon nanotubes (MWCNTs, Cheap Tube Inc., USA) were used to create graphene oxide nanoribbons (GONRs). In the synthesis of GONRs and GO, potassium permanganate (KMnO 4 , BDH, England) functioned as an oxidant. Uranyl nitrate (UO 2 (NO 3 ) 2 ·6H 2 O, solid, 98–102%, BDH, England) was used to create 1 g/L stock solutions of uranium: 2.109 g of uranyl nitrate was dissolved in deionized water containing 1 mL of concentrated HNO 3 , and the solution was diluted to the mark in a 1-L standard flask with deionized water. Sodium alginate (SA) was obtained from Sigma-Aldrich, Germany. All other reagents utilized in this experiment were of analytical purity, and deionized water was used to make all solutions.
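As a quick sanity check on the stock-solution preparation, the mass of uranyl nitrate hexahydrate required for a 1 g/L uranium stock can be computed from the molar masses (the molar masses are standard values, not stated in the text):

```python
# Mass of UO2(NO3)2·6H2O needed for a 1 g/L uranium stock in a 1-L flask.
M_U = 238.03      # molar mass of uranium, g/mol (standard value)
M_SALT = 502.13   # molar mass of UO2(NO3)2·6H2O, g/mol (standard value)

target_u_g_per_L = 1.0   # desired uranium concentration
volume_L = 1.0           # 1-L standard (volumetric) flask

salt_mass = target_u_g_per_L * volume_L * M_SALT / M_U
print(f"weigh out {salt_mass:.3f} g")  # ≈ 2.110 g, consistent with the 2.109 g used
```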
Preparation of GO/GONRs/SA aerogels
To create graphene oxide (GO), H 2 SO 4 and H 3 PO 4 were combined in a 9:1 volume ratio (180:20 mL). After 15 min of stirring, 1.5 g of powdered graphite was added, and 9.0 g of KMnO 4 was then added slowly while the mixture was continually stirred. The mixture was stirred regularly for 12 h. The reaction was then terminated with 200 mL of ice-cold deionized water and 4 mL of 30% hydrogen peroxide (H 2 O 2 ), upon which the solution turned bright yellow. For purification, the mixture was divided among numerous centrifuge tubes and washed alternately with 10% hydrochloric acid (HCl) and deionized water, with centrifugation at 5000 rpm for 15 min at each washing step. The finished product was then dried in an oven for 24 h at 80 °C 18 .
To produce GONRs, multi-walled carbon nanotubes were unzipped. In a typical process, 1 g of MWCNTs was pre-oxidized with sulfuric acid (150 mL) at room temperature for six hours with stirring. Then, 500 wt% KMnO 4 was added to the reaction mixture, which was stirred for one hour at room temperature. The mixture was heated to 55 °C for 30 min, after which the temperature was increased to 70 °C and held there for some time before the mixture was allowed to return to room temperature. The mixture was then poured onto 400 mL of ice combined with 5 mL of 10% v/v H 2 O 2 and filtered through a 0.5 μm PTFE membrane. Following dispersion in deionized water (120 mL), the material was sonicated for 30 min. The mixture was then filtered again through a PTFE membrane, and the filtered product was dried at 60 °C for 24 h 19 – 21 .
Dried GO (0.32 g; 8 mg/mL) and GONRs (0.32 g; 8 mg/mL) were ultrasonically dispersed in 20 mL of deionized water for 1 h before being combined with 20 mL of SA solution (1 g; 25 mg/mL). The uniform solution was then dropped into a CaCl 2 (2 wt%) solution to produce hydrogel beads, which were then cross-linked with Ca 2+ for 6 h 22 . The GO/GONRs/SA hydrogel beads were rinsed five times with deionized water before being vacuum freeze-dried for 72 h to create the aerogels 23 .
Characterization
The materials were studied using transmission electron microscopy (TEM), scanning electron microscopy (SEM), Fourier transform infrared (FT-IR) spectroscopy, and X-ray diffraction (XRD). The SEM experiments were carried out with a TESCAN MIRA3 scanning electron microscope operated with a 15 kV electron beam. The FT-IR spectra of the compounds were recorded at room temperature in pressed KBr pellets (Aldrich, 99%, analytical reagent) using a BRUKER TENSOR 35 spectrophotometer. The crystal phases were verified at 25 °C using an X-ray diffractometer (Xpert, PANalytical Philips, Holland) with Cu Kα radiation (wavelength 1.54 Å) over a 2θ range from 10° to 80°. The operating parameters for the Cu Kα radiation source were a voltage of 40 kV, a current of 30 mA, and a scanning speed of 10°/min. TEM images of the nanocomposite were captured at 100 kV using a Philips EM 208S microscope.
Adsorption experiments
A specified quantity of GO/GONRs/SA aerogel was introduced into the uranium solution for the uranium adsorption studies. Centrifugation was then used to remove the adsorbent, and the supernatant was analyzed by energy-dispersive X-ray fluorescence (ED-XRF; Rigaku, USA). Lee et al. 24 used high-resolution ED-XRF to determine the concentration of uranium in contaminated soil in a similar study using this analytical method. Balaji Rao et al. 25 evaluated the ED-XRF method, which was developed and standardized for the routine assessment of uranium in various process stream solutions ranging from 0.1 to 400 g L −1 from a uranium extraction plant at the Nuclear Fuel Complex. Kumar et al. 26 investigated the possibility of using Rh K scattered peaks (Compton and Rayleigh) to minimize the matrix effect when determining uranium in different matrices by ED-XRF spectrometry. The removal rate (p) and adsorption capacity (q e ) were obtained using the following equations: q e = (C 0 − C e )V/M and p (%) = 100 × (C 0 − C e )/C 0 , where V (mL) is the volume of the uranium solution, M (mg) is the weight of the GO/GONRs/SA aerogel, and C e (mg/L) and C 0 (mg/L) are the equilibrium and initial concentrations of uranium in the solution, respectively.

Results and discussion
Characterization of GO/GONRs/SA aerogels
Figure 1 displays the FT-IR spectra of GO, GONRs, and the GO/GONRs/SA aerogels. The FT-IR spectrum of GO (Fig. 1 a) reveals distinctive peaks at 1619 cm −1 (graphitic C=C) and 1341 cm −1 (bending vibrations of –CH and –CH2 groups). In addition, GO shows distinctive bands at 1720, 1619, and 1046 cm −1 , which correspond to the stretching vibrations of carboxyl C=O, aromatic C=C, and epoxy C–O, respectively, while the large peak between 3165 and 3403 cm −1 indicates the –OH of the carboxylic group 27 , 28 . As seen in Fig. 1 b, the FT-IR spectrum of GONRs likewise revealed a variety of vibrational frequencies: distinctive peaks at 1532 and 1560 cm −1 (graphitic C=C), 2941 cm −1 (alkane –CH stretching vibration), 1098 and 1109 cm −1 (C–O stretching vibration), and 1663 cm −1 (C=O stretching vibration) 29 .
The possible interactions between SA chains and GO/GONRs nanosheets were also examined using FT-IR. In Fig. 1 c, the broad and intense peak at 3208 cm −1 corresponds to the OH stretching vibration 30 . A noticeable shoulder peak is seen, attributable to the strong hydrogen bonds between sodium alginate chains.
Figure 2 displays the X-ray diffraction (XRD) patterns of the GO, GONRs, and GO/GONRs/SA aerogels. The XRD pattern of GO is shown in Fig. 2 a; the diffraction peak characteristic of GO was visible at 2θ = 12.22°, corresponding to the reflection of its (001) crystal plane. Another peak, belonging to the (100) crystal plane, was seen at 2θ = 43° 21 . Figure 2 b displays the results of the XRD structural characterization of the GONRs: the two diffraction peaks at 25.0° and 43.0° correspond to the (002) and (100) planes, respectively 31 . The XRD pattern of the GO/GONRs/SA aerogel is displayed in Fig. 2 c. The large peak of SA at 2θ = 13.6° reflects the generally amorphous character of SA. The absence of the GO diffraction peak at 2θ = 12.22° in the GO/GONRs/SA aerogel shows that the small addition of GO has no appreciable impact on the crystallinity. Additionally, intermolecular interactions may enhance the homogeneity of the components and give high miscibility for the creation of the required aerogels 32 , 33 .
GO possesses a two-dimensional, sheet-like structure, as seen in Fig. 3 a. The SEM images amply demonstrate the many lamellar layers and distinct sheet borders that characterize the GO structure 34 . The films are folded in parts and stacked one on top of the other. It is also important to note that the borders of the GO sheets were thicker, a result of the oxygen-containing functional groups being linked mostly at the boundaries of GO. The SEM images of the GONR sheets are shown in Fig. 3 b and reveal a much rougher surface with a larger ribbon structure. Figure 3 c demonstrates that the GO/GONRs/SA aerogel had a lamellar and network structure, showing that the GO sheets were compatible with the SA and efficiently integrated.
The TEM images in Fig. 4 clearly show the formation of graphene oxide sheets together with graphene oxide nanoribbons, how these materials were assembled with alginate, and the construction of a triple network. The porous structure of the aerogels is plainly visible, and graphene oxide nanoribbons can be seen emerging from the nanocomposite's surface like antennas.
Adsorption of uranium(VI) on GO/GONRs/SA aerogels

pH effect
The solution pH is one of the most important factors for the sorption of U(VI) because it influences the speciation of U(VI), the surface binding sites, and the surface charge. Figure 5 demonstrates the considerable dependence of adsorption on solution pH: the adsorption rises abruptly from pH 2.0 to pH 4.0 and slowly from pH 4.0 to pH 8.0. At low pH, U(VI) is largely present in the solution as UO 2 2+ , and sorption is minimal because H + ions compete for the binding sites of the GO-based aerogels. At pH 5.0–7.0, where UO 2 2+ , UO 2 (OH) + , (UO 2 ) 2 (OH) 2 2+ , and (UO 2 ) 3 (OH) 5 + predominate, electrostatic attraction causes the sorption to reach its maximum. At pH ≥ 8.0, the removal of U(VI) by GO/GONRs/SA aerogels is attributable mainly to the precipitation of UO 2 (OH) 2 (s) because of its low solubility. By examining this variable and analyzing the results of the studies, it was determined that pH = 6 is optimal for this adsorption.
Conclusions
Aerogel materials are very appealing as adsorbents because of their diverse chemical composition, high porosity, and variety of pore sizes, including micro-, meso-, and macropores. These features lead to efficient and affordable processes, and frequently also to selectivity towards the desired adsorbates. In the current study, freeze-drying was used to create GO/GONRs/SA hybrid aerogels, which were then used to eliminate uranium(VI) from simulated wastewater solutions. The aerogels produced from GO/GONRs/SA have a porous 3D network structure and a high surface area. The sorption of U(VI) onto GO/GONRs/SA aerogels was affected by pH. The pseudo-second-order model could be utilized to describe the sorption kinetics, whereas the Langmuir isotherm could effectively define the sorption process. The maximum monolayer sorption capacity achievable is 929.16 mg g −1 . The adsorption process was found to be effective, spontaneous, and endothermic. These investigations revealed that the synthesized GO/GONRs/SA aerogels might serve as an effective sorbent for effluents from the nuclear industry and other important water sources.

Abstract

Waste-water pollution by radioactive elements such as uranium has emerged as a major issue that might seriously harm human health. Graphene oxide, graphene oxide nanoribbons, and sodium alginate were combined to create novel nanocomposite aerogels (GO/GONRs/SA) using a modified Hummers' process and freeze-drying as an efficient adsorbent. Batch studies were conducted to determine the adsorption of uranium(VI) by the aerogel. Aerogels composed of GO/GONRs/SA were used as an effective adsorbent for the removal of U(VI) from aqueous solution. Fourier transform infrared (FT-IR) spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM), and transmission electron microscopy (TEM) were used to describe the structure, morphologies, and characteristics of the GO/GONRs/SA aerogels.
The effects of the initial uranium(VI) concentration and other environmental factors, namely contact time, pH, and temperature, on U(VI) adsorption were investigated. A pseudo-second-order kinetic model can be employed to characterize the kinetics of U(VI) adsorption onto the aerogels. The Langmuir model could be applied to describe the adsorption isotherm, and the maximum adsorption capacity was 929.16 mg/g. The adsorption reaction is endothermic and occurs spontaneously.
Contact time effect and studies of adsorption kinetics
Contact time is another crucial factor that can reveal the adsorption kinetics. As demonstrated in Fig. 6 , the sorption capacity of U(VI) changes as a function of contact time. The quantity of adsorbed U(VI) rose quickly within the first hour before gradually reaching equilibrium at 8 h. The U(VI) adsorption of GO-based nanomaterials can quickly approach equilibrium because the huge surface area and abundant effective groups on the surface of GO-based nanoparticles significantly increase the rate of U(VI) adsorption.
To explore the mechanism of adsorption, two alternative kinetic models, the pseudo-first-order and pseudo-second-order models, were used. The models may be expressed using the corresponding Eqs. ( 3 ) and ( 4 ) below: ln(q e − q t ) = ln q e − k 1 t (3) and t/q t = 1/(k 2 q e 2 ) + t/q e (4), where q e (mg/g) is the amount of U(VI) sorbed at equilibrium, q t (mg/g) is the amount sorbed at any given time t (h), and k 1 (h −1 ) and k 2 (g mg −1 h −1 ) are the pseudo-first-order and pseudo-second-order sorption rate constants, respectively.
The essential kinetic parameters for the equations were calculated, and the values of k 1 and k 2 are shown in Table 1 . As can be observed, the pseudo-second-order model was better suited to represent the adsorption process of U(VI) on GO/GONRs/SA aerogels since it had the greatest correlation coefficient (R 2 ) compared with the other kinetic model, and the estimated q e,cal value was closer to the experimental q e,exp .
The pseudo-second-order model is founded on the idea that the rate-determining step may be regarded as chemisorption.
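As an illustration of how the two linearized kinetic models are fitted and compared, here is a minimal sketch; the contact-time data below are made up for demonstration and are not the paper's measurements (NumPy is assumed to be available):

```python
import numpy as np

# Synthetic contact-time data for illustration only (h, mg/g)
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
qt = np.array([210.0, 320.0, 450.0, 560.0, 620.0, 645.0, 655.0])
qe_exp = 660.0  # assumed experimental equilibrium capacity (mg/g)

# Pseudo-first-order, linear form: ln(qe - qt) = ln(qe) - k1*t
slope1, _ = np.polyfit(t, np.log(qe_exp - qt), 1)
k1 = -slope1  # rate constant, h^-1

# Pseudo-second-order, linear form: t/qt = 1/(k2*qe^2) + t/qe
slope2, intercept2 = np.polyfit(t, t / qt, 1)
qe_cal = 1.0 / slope2            # calculated equilibrium capacity (mg/g)
k2 = slope2 ** 2 / intercept2    # rate constant, g mg^-1 h^-1

# The model whose linear fit gives the higher R^2 and whose qe_cal is
# closer to qe_exp is taken to describe the kinetics.
```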
Graphene oxide (GO) has a large specific surface area due to its two-dimensional structure and the presence of oxygen-containing functional groups, such as hydroxyl, epoxy, carboxyl, and carbonyl groups. These functional groups provide binding sites for adsorption 35 . Graphene oxide nanoribbons (GONRs) further enhance the adsorption capacity due to their unique structure and larger surface area compared to GO; GONRs have a theoretical surface area of about 2,630 m2/g 36 . Sodium alginate (SA), when combined with GO and GONRs, can form nanocomposite aerogels. SA acts as a scaffold, providing a three-dimensional network structure that increases the overall surface area available for adsorption. By combining GO, GONRs, and SA, the adsorption capacity is expected to be increased compared to using any individual component alone. The synergistic effect of these materials' properties, such as high surface area, presence of functional groups, and three-dimensional network structure, enhances the adsorption capacity for uranium removal 37 .
Effect of initial U(VI) concentration and isotherm studies
The equilibrium experiments were carried out at various initial U(VI) concentrations ranging from 50 to 350 mg/L to better understand the effects of the initial U(VI) concentrations. The amount of U(VI) adsorption on GO/GONRs/SA aerogels increased as the equilibrium concentration of U(VI) rose, as seen in Fig. 7 .
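As a concrete illustration of how the adsorption capacity q e and the removal rate p defined in the Methods are computed from batch data, here is a minimal sketch; the numbers used are hypothetical, not measured values:

```python
def adsorption_metrics(c0_mg_L, ce_mg_L, volume_mL, mass_mg):
    """Return adsorption capacity q_e (mg/g) and removal rate p (%)
    for a batch run: q_e = (C0 - Ce) * V / M."""
    volume_L = volume_mL / 1000.0
    mass_g = mass_mg / 1000.0
    qe = (c0_mg_L - ce_mg_L) * volume_L / mass_g  # mg of U per g of aerogel
    p = 100.0 * (c0_mg_L - ce_mg_L) / c0_mg_L     # percentage removed
    return qe, p

# Hypothetical run: 50 mL of 100 mg/L U(VI) reduced to 10 mg/L by 10 mg of aerogel
qe, p = adsorption_metrics(100.0, 10.0, 50.0, 10.0)  # qe = 450 mg/g, p = 90 %
```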
The equilibrium sorption isotherms were simulated using the Freundlich and Langmuir isotherm models. According to the Langmuir isotherm model, monolayer adsorption takes place on a homogeneous surface without any interactions between adsorbates at neighboring binding sites 38 . Its linear form may be written as Eq. ( 5 ): C e /q e = C e /q max + 1/(K L q max ), where q e is the sorbed amount at equilibrium (mg/g), q max is the Langmuir monolayer sorption capacity (mg/g), K L (L/mg) is the equilibrium constant, and C e is the equilibrium concentration (mg/L).
The Freundlich isotherm model explains the sorption of solutes from a liquid to a solid surface using an empirical relation. It assumes the participation of several sites with different sorption energies, and its linear form is given by Eq. ( 6 ): ln q e = ln K F + (1/n) ln C e , where the Freundlich constants K F [(mg/g) (L/mg) 1/n ] and n correspond to the sorbent's adsorption capacity and adsorption intensity, respectively. The relevant parameters were calculated from the slopes and intercepts of the plots of C e /q e versus C e and ln q e versus ln C e (Table 2 ).
Upon comparison of the R 2 values, the Langmuir isotherm was determined to be more appropriate than the Freundlich isotherm for describing the features of U(VI) adsorption on GO/GONRs/SA aerogels. According to these results, the adsorption of U(VI) on GO/GONRs/SA aerogels most likely involved a monolayer mechanism. The results showed that the GO/GONRs/SA composite had a considerable adsorption capacity for U(VI) of 929.16 mg/g, which was higher than those reported in previous studies using GO/SA composite beads (149.76 mg/g 22 ) and an L-lysine-GO/SA composite (704.22 mg/g 37 ).
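The linearized Langmuir and Freundlich fits described above can be reproduced as follows; the equilibrium data here are invented for illustration and are not the study's measurements (NumPy is assumed to be available):

```python
import numpy as np

# Illustrative equilibrium data: Ce (mg/L), qe (mg/g) -- not the paper's values
ce = np.array([5.0, 15.0, 40.0, 90.0, 160.0, 240.0])
qe = np.array([310.0, 520.0, 700.0, 810.0, 860.0, 890.0])

# Langmuir linear form (Eq. 5): Ce/qe = Ce/qmax + 1/(KL*qmax)
slope_L, intercept_L = np.polyfit(ce, ce / qe, 1)
qmax = 1.0 / slope_L          # monolayer capacity, mg/g
KL = slope_L / intercept_L    # Langmuir constant, L/mg

# Freundlich linear form (Eq. 6): ln(qe) = ln(KF) + (1/n)*ln(Ce)
slope_F, intercept_F = np.polyfit(np.log(ce), np.log(qe), 1)
n = 1.0 / slope_F             # adsorption intensity
KF = np.exp(intercept_F)      # capacity-related constant

# Comparing R^2 of the two linear regressions identifies the better model.
```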
Thermodynamic studies
To understand the change in energy and establish whether the process is spontaneous, sorption thermodynamics is an important factor. Table 3 provides the thermodynamic characteristics ΔG, ΔS, and ΔH of U(VI) adsorption on the hybrid aerogel. Figure 8 depicts the thermodynamics diagram as well as the relationship between adsorption temperature and adsorption capacity. According to Table 3 , the adsorption process was spontaneous and endothermic, and the reaction was promoted by high temperatures, as indicated by the positive values of ΔS° and ΔH° and the negative value of ΔG°.
The changes in the thermodynamic parameters throughout the adsorption process were computed using Eq. ( 7 ): ln K eq = ΔS°/R − ΔH°/(RT).
K eq represents the distribution coefficient (mL g −1 ), R represents the gas constant (8.314 J mol −1 K −1 ), and T represents the absolute temperature (K). ΔH is the enthalpy change (kJ mol −1 ), and ΔS is the entropy change (J mol −1 K −1 ). The change in Gibbs free energy ΔG (kJ mol −1 ) was obtained from Eq. ( 8 ): ΔG = ΔH − TΔS.
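Equations (7) and (8) amount to a simple van't Hoff regression; the distribution coefficients below are assumed values for illustration only, not the paper's data (NumPy is assumed to be available):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Assumed distribution coefficients at three temperatures (illustrative)
T = np.array([298.0, 308.0, 318.0])      # K
K_eq = np.array([12.0, 20.0, 31.0])      # equilibrium distribution coefficients

# van't Hoff (Eq. 7): ln(Keq) = dS/R - dH/(R*T) -> regress ln(Keq) on 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(K_eq), 1)
dH = -slope * R      # enthalpy change, J/mol; positive -> endothermic
dS = intercept * R   # entropy change, J/(mol*K); positive -> more randomness
dG = dH - T * dS     # Eq. 8: Gibbs free energy, J/mol; negative -> spontaneous
```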
The values of enthalpy ΔH and entropy ΔS given in Table 3 were calculated using the intercept and slope of the plot of ln K eq versus T −1 (Fig. 8 ) according to Eq. ( 7 ). The positive values of ΔH and ΔS indicate the endothermic character of the sorption process and the increasing randomness at the solid-solution interface during sorption, respectively. The ΔG value became more negative with increasing temperature, indicating that the adsorption process was spontaneous under all conditions examined.

Acknowledgements
The authors are thankful to the Deanship of Scientific Research at the University of Bisha for supporting this work through the Fast-Track Research Support Program.
Author contributions
A.A.J., D.H.H., K.H.L., S.A., A.K.J., G.M.S., M.M.A. conceived and designed the experiments. A.A.J., D.H.H., K.H.L., performed the experiments and data analysis, S.A., A.K.J., G.M.S., M.M.A., prepared and characterized samples, M.S.J., A.M.A., S. A., G.M.S. A.G.A. writing-review and editing the manuscript.
Data availability
All data generated or analyzed during this study are included in this published article.
Competing interests
The authors declare no competing interests.

Sci Rep. 2024 Jan 13; 14:1285
PMC10787741 (PMID 38218970)

Introduction
Globally, bladder cancer (BLCA) was estimated to account for 573,000 new cases and 213,000 deaths in 2020 [ 1 ]. Surgery in combination with adjuvant chemotherapy has improved the survival rates of patients with BLCA. However, the recurrence and aggressiveness of BLCA are major factors contributing to the poor prognosis [ 2 ]. Therefore, understanding the molecular regulatory network of BLCA is crucial for improving treatment options. Under normoxic conditions, cancer cells utilize glycolytic metabolism instead of oxidative phosphorylation to generate energy, which is referred to as aerobic glycolysis or the Warburg effect [ 3 ]. The Warburg effect is closely related to BLCA pathogenesis and aggressiveness [ 4 ].
c-Myc plays a vital role in regulating aerobic glycolysis. It directly activates the transcription of nearly all glycolytic genes by binding to the classical E-box sequence, including GLUT1 , HK2 , and LDHA [ 5 ]. Additionally, c-Myc can activate the transcription of splicing factors to enhance the expression of PKM2 and promote glycolysis [ 6 ].
Proteins are the fundamental units of life activities, and ubiquitination is the second most common type of protein posttranslational modification. Deubiquitinases (DUBs) remove ubiquitin from ubiquitinated substrates, counteracting E3 ubiquitin ligase-mediated modification. The balance between the activities of E3 ubiquitin ligases and DUBs determines the fate of substrate proteins. Increasing evidence suggests that aberrant ubiquitination is critical in tumor development [ 7 ]. For instance, the E3 ubiquitin ligases NEDD4-1 and WWP2 catalyze PTEN polyubiquitination, leading to PTEN degradation and promoting tumor growth [ 8 , 9 ]. Previously, our group discovered that the E3 ubiquitin ligase RNF126 affects BLCA progression by regulating PTEN stability [ 10 ]. In contrast to ubiquitin ligases, USP13 and OTUD3 stabilize PTEN by cleaving its polyubiquitin chains, inhibiting tumor progression [ 11 , 12 ].
c-Myc is a highly active transcription factor in most human tumors that regulates a variety of tumor phenotypes, including proliferation, invasion, cell survival, genomic instability, angiogenesis, metabolism and immune evasion [ 13 ]. Amplification of the MYC oncogene has been observed in BLCA, and its products have been found to promote BLCA tumorigenesis [ 14 – 16 ]. Despite its importance in tumorigenesis, c-Myc is highly unstable and is rapidly degraded through the ubiquitin‒proteasome pathway [ 17 ]. Several E3 ubiquitin ligases, such as SKP2, FBXW7, CHIP, and FBXO32, have been shown to ubiquitinate and degrade c-Myc, thus suppressing tumorigenesis [ 18 – 21 ]. Conversely, USP37 and USP29 promote tumorigenesis by stabilizing c-Myc through deubiquitination [ 22 , 23 ]. While c-Myc has been the focus of much research as a potential therapeutic target, its indirect targeting has gained attention due to its instability. One promising approach is to selectively inhibit deubiquitinases that stabilize c-Myc [ 24 ]. For example, P22077, a small molecule inhibitor of USP7, significantly inhibited neuroblastoma with MYCN amplification in a xenograft model [ 25 ]. Therefore, identifying more deubiquitinases of c-Myc can expand the possibilities for indirect inhibition of c-Myc to treat tumors. This study reveals that USP43 promotes aerobic glycolysis and metastasis in BLCA by stabilizing c-Myc, thus providing a novel target for indirect inhibition of c-Myc. | Materials and methods
Cell culture
T24 and 5637 cells were cultured in 1640 medium, UM-UC-3 cells were cultured in MEM, and 293 T cells were cultured in DMEM. The cells used in this study were obtained from the Chinese Academy of Sciences Cell Bank and were free of mycoplasma contamination. Cell lines were validated using the short tandem repeat (STR) method. All media were supplemented with 10% fetal bovine serum. All cells were cultured at 37 °C in an atmosphere of 5% CO 2 .
siRNAs and plasmids
The human DUBs siRNA library (G-104705) was purchased from GE Healthcare Dharmacon. Small interfering RNAs (siRNAs) targeting USP43 , MYC and FBXW7 were synthesized by our commission from GenePharma (Suzhou, China). The siRNA sequences were as follows:
5’-GGUGGUCCUUUGGAUCCAATT-3’ (siUSP43-1);
5’-CCAGUUACCCGCUGGACUUTT-3’ (siUSP43-2);
5’-CGUUGUCUUGUAAUCUCUAAA-3’ (siUSP43 3’ UTR );
5’-GCUUGUACCUGCAGGAUCUTT-3’ (siMYC-1);
5’-GGAAGAAAUCGAUGUUGUUTT-3’ (siMYC-2);
5’-GCAUAUGAUUUUAUGGUAATT-3’ (siFBXW7).
The Flag-USP43 plasmid was a gift from Professor Yongfeng Shang at Peking University. Flag-c-Myc and HA-c-Myc plasmids were gifts from Professor Guoliang Qing at Wuhan University. Molecular cloning was used for all other constructs, and DNA sequencing was performed to confirm their integrity.
RNA extraction and quantitative reverse transcription PCR (qRT-PCR)
RNA was extracted and reverse transcribed, and a qRT-PCR protocol was carried out as described previously [ 26 ]. Supplementary Table S1 lists the qRT-PCR primer sequences used in this study.
Western blot analyses
In general, cells were lysed with RIPA lysate containing a cocktail of protease inhibitors for 40 min on ice and then centrifuged at 14,000 g at 4 °C for 5 min. The supernatant was then removed, and 5× loading buffer was added and denatured at 100 °C for 10 min. Total proteins were separated by SDS-PAGE and then subjected to immunoblotting experiments with the corresponding antibodies as previously described [ 27 ]. Information about the primary antibodies used in the article is listed in Supplementary Table S2 .
Glucose consumption and lactate production measurement
In short, after 48 h of transfection, 1 million cells were collected and reseeded in 6-well plates. After 8 h of cell attachment, the cells were cultured in serum-free medium for another 24 h. Then, the supernatant was collected, and the glucose and lactate contents were measured by kits. A glucose assay kit (BioVision, #K606-100) was used to quantify glucose levels, and a lactate assay kit (BioVision, #K607-100) was used to determine lactate levels.
Wound healing assay
The wound healing assay involved seeding cells in 6-well plates. After the cells became confluent, they were scratched out with a pipette tip, and the floating cells were washed off with PBS. Immediately after, the wound was photographed. Following 24 h of culture in serum-free medium, the wound was recorded again.
Transwell migration assay
For the transwell migration assay, 40,000 cells mixed in 200 μL of serum-free medium were seeded in an upper chamber (Corning, USA). Then, 600 μL of complete medium was added to the bottom chambers. After 24 h of incubation, 30 min of fixation was followed by 1 h of staining with crystal violet for the migrated cells. After cleaning and drying, we photographed the cells under a microscope and counted them using ImageJ software.
Construction of stable USP43 knockdown cell lines
Packaged lentiviruses were purchased from GenePharma (Suzhou, China) with the following sequences: 5’-TTCTCCGAACGTGTCACGT-3’ (shNC) and 5’-GGTGGTCCTTTGGATCCAA-3’ (shUSP43). Stable cell lines were constructed and selected according to the GenePharma Recombinant Lentivirus Operation Manual. The successful construction of stable cell lines was confirmed by qRT-PCR and Western blot analysis.
Animal studies
We purchased 4-week-old male BALB/c nude mice from WQJX BioTechnology (Wuhan, China). After 1 week of adaptive feeding, the mice were randomly divided. To establish a nude mouse model of popliteal lymph node metastasis, 1 × 10 6 stable T24 cells (T24-shNC or T24-shUSP43) were resuspended in 50 μL of sterile PBS and injected into the right footpads of nude mice. The nude mice were observed every 3 days and sacrificed after 4 weeks. The popliteal lymph nodes were dissected, and the volume was measured. Subsequent pathological and immunohistochemical analyses were then performed after fixation with paraformaldehyde. LN volume (mm 3 ) = (length (mm)) × (width (mm)) 2 × 0.52. To establish a model of caudal vein-lung metastasis in nude mice, the nude mice were injected via the tail vein with 1 × 10 6 stably transformed T24 cells resuspended in 100 μL PBS. We observed the nude mice every three days, and in vivo imaging of small animals was performed after 6 weeks to determine the status of lung metastasis. For further investigation, mice were sacrificed, and the lungs were dissected. Two groups of mice ( n = 5 each) were maintained for survival analysis. The experiment ended when the subject died or survived 60 days. The investigator was blinded to the group allocation of the mice during the experiment. The sample size is described in the corresponding figure legend. No animals were excluded from the analysis.
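The lymph node volume formula above (a standard ellipsoid approximation) is straightforward to encode; the caliper measurements in the example are hypothetical:

```python
def lymph_node_volume(length_mm: float, width_mm: float) -> float:
    """LN volume (mm^3) = length (mm) x width (mm)^2 x 0.52, as in the text."""
    return length_mm * width_mm ** 2 * 0.52

# Hypothetical measurements: a node 4 mm long and 3 mm wide
vol = lymph_node_volume(4.0, 3.0)  # 4 * 9 * 0.52 = 18.72 mm^3
```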
Coimmunoprecipitation assay
In brief, cells were lysed and the supernatant was collected, after which the indicated antibody was added and incubated overnight with shaking at 4 °C. The next morning, 20 μL of washed Protein A/G magnetic beads were added to the antigen-antibody mixture and incubated for a further 2 h. After three washes with IP washing buffer, the beads were resuspended in 1× loading buffer and heated at 100 °C for 5 min to elute the coprecipitated complexes. The samples were then subjected to immunoblot analysis.
GST pull-down
GST-tagged c-Myc and His-tagged USP43 proteins were expressed and purified using an E. coli expression system. A mixture of 2 μg of His-USP43 protein and 2 μg of GST or GST-c-Myc protein in IP binding buffer was incubated for 4 h with shaking at 4 °C. Subsequently, 30 μL of washed Glutathione Sepharose beads were added for a further 2 h of incubation; after three washes with IP washing buffer, the beads were resuspended in 1× loading buffer and denatured at 100 °C for 10 min for subsequent immunoblot analysis.
Chromatin immunoprecipitation (ChIP)
The manufacturer’s instructions were followed when performing ChIP. In short, 1 × 10⁷ cells were fixed with 1% formaldehyde for 10 min at 37 °C and immediately quenched with 0.125 M glycine for 5 min at 37 °C before lysis in SDS lysis buffer. Afterwards, chromatin was fragmented by sonication and incubated overnight at 4 °C with anti-c-Myc antibody (Abcam, ab32072) or IgG (Proteintech, B900610) and Protein A/G magnetic beads. After washing the complexes with low-salt and high-salt solutions in turn, purified DNA was obtained for subsequent quantitative PCR (qPCR) analysis. The primer sequences for the USP43 promoter are shown in Supplementary Table S1.
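ChIP-qPCR enrichment is commonly summarized as percent of input. The paper does not state its exact quantification scheme, so the following is a hedged sketch of the standard percent-input calculation; the function name, Ct values, and the 1% input fraction are assumptions for illustration:

```python
import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """Percent-input for ChIP-qPCR.

    The input Ct is first adjusted for the fraction of chromatin saved
    as input (a 1% input is 'worth' log2(100) extra cycles), then the
    IP signal is expressed relative to 100% of the chromatin.
    """
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# e.g., IP Ct 28 vs. a 1% input at Ct 25
enrichment = percent_input(28.0, 25.0)  # ~0.125% of input recovered
```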
Dual-luciferase reporter assay
293T cells were transfected with the indicated luciferase plasmids in 24-well plates, and luciferase activity was measured using the Dual-Luciferase® Assay Kit (Promega, E1910). Firefly luciferase activities were normalized to Renilla luciferase control values.
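The firefly/Renilla normalization described above is a simple per-well ratio; a minimal sketch follows. The replicate handling and the fold-activation summary are illustrative assumptions, not the kit's software:

```python
def normalized_activity(firefly: list[float], renilla: list[float]) -> list[float]:
    """Per-well firefly luminescence divided by the Renilla
    transfection control, correcting for transfection efficiency."""
    return [f / r for f, r in zip(firefly, renilla)]

def fold_activation(treated_ratios: list[float], control_ratios: list[float]) -> float:
    """Mean normalized activity of a treated condition relative to control."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated_ratios) / mean(control_ratios)
```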
In vivo deubiquitination assay
After transfection as indicated, cells were treated with 10 μM MG132 for 6 h before harvesting and then lysed on ice for 40 min in RIPA buffer containing a protease inhibitor cocktail. The supernatant was obtained after centrifugation at 4 °C for 10 min and incubated overnight with the corresponding antibody. The next morning, the lysate-antibody mixture was incubated with washed Protein A/G magnetic beads. Finally, polyubiquitinated c-Myc was analyzed by SDS-PAGE and immunoblotting.
In vitro deubiquitination assay
HA-c-Myc and Myc-Ubiquitin plasmids were transfected into 293T cells, and 10 μM MG132 was added 6 h before harvesting. In parallel, we lysed 293T cells transfected with the GFP-USP43 plasmid and immunoprecipitated GFP-USP43 with an anti-GFP antibody; USP43 protein was then obtained by nondenaturing elution. Polyubiquitinated c-Myc was incubated with or without the USP43 protein for 2 h at 37 °C in deubiquitination buffer containing 50 mM Tris-HCl, 5 mM MgCl₂, 2 mM DTT, and 2 mM ATP-Na₂ with proteasome inhibitors.
Statistical analysis
The statistical analysis was performed with GraphPad Prism version 9.0. Two-tailed Student’s t-tests were used for comparisons between two groups, and differences among three or more groups were assessed by one-way or two-way ANOVA with Tukey’s correction. A log-rank test was used to assess the significance of differences in mouse survival. p < 0.05 was considered statistically significant.

Results
siRNA screening reveals USP43 as a key deubiquitinase for glycolysis and c-Myc transcriptional activity
To identify deubiquitinases (DUBs) that play roles in both glycolysis and c-Myc transcriptional activity, we performed further screening of seven DUBs (Supplementary Fig. S1A ) [ 23 ]. We constructed a plasmid with a 5× E-box sequence reflecting c-Myc transcriptional activity and found that knockdown of USP43 showed the strongest inhibitory effect on the 5× E-box luciferase reporter, based on a luciferase reporter assay (Fig. 1A and Supplementary Fig. S1B, C ). Only USP43 was found to be upregulated in BLCA among the seven DUBs screened, and patients with high USP43 expression had a worse prognosis than those with low expression (Fig. 1B, C and Supplementary Fig. S2A–H ). Immunohistochemical analysis of the BLCA tissue microarray showed that the protein level of USP43 increased with increasing pathological grade (Fig. 1D and Supplementary Fig. S2I ). GSEA enrichment analysis revealed that USP43 was positively correlated with the glycolysis pathway and MYC target pathway in the TCGA BLCA dataset (Fig. 1E, F ). The dual-luciferase reporter assay showed that USP43 could further enhance the activation of LDHA-luciferase by c-Myc (Fig. 1G ). We then measured the effect of USP43 on glycolysis in BLCA cells and found that both glucose consumption and lactate production were reduced after USP43 knockdown in T24 and 5637 cells (Fig. 1H, I and Supplementary Fig. S3A, B ). The mRNA levels of key glycolytic genes GLUT1 , HK2 , PKM2 , and LDHA were also decreased after USP43 knockdown (Fig. 1J and Supplementary Fig. S3C ), consistent with the GEPIA website showing that USP43 is positively correlated with GLUT1 , HK2 , PKM2 , and LDHA at the mRNA level (Supplementary Fig. S3D–G ). The protein levels of GLUT1 and LDHA also showed the same trend (Supplementary Fig. S3H, I ). Knockdown of MYC attenuated the enhancement of LDHA promoter activity by USP43 (Fig. 1K ). 
Moreover, in the absence of MYC , the mRNA levels of four glycolytic enzymes were not affected by USP43 depletion (Supplementary Fig. S3J ). These results demonstrate that USP43 positively regulates glycolysis in BLCA and c-Myc transcriptional activity.
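Glucose consumption and lactate production readouts like those above are typically normalized to cell number (or total protein). The paper does not specify its normalization, so this is a hedged sketch with illustrative function names and units:

```python
def glucose_consumption_per_1e6(blank_medium_mM: float, spent_medium_mM: float,
                                cell_count: float) -> float:
    """Glucose consumed (mM per 10^6 cells): glucose in cell-free blank
    medium minus the glucose remaining after culture, per million cells."""
    return (blank_medium_mM - spent_medium_mM) / (cell_count / 1e6)

def lactate_production_per_1e6(lactate_mM: float, cell_count: float) -> float:
    """Lactate released into the medium (mM per 10^6 cells)."""
    return lactate_mM / (cell_count / 1e6)
```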
USP43 regulates the metastasis of BLCA cells
Next, the effect of USP43 on BLCA cell migration was examined. Specifically, we evaluated the effect of USP43 on the migration ability of T24, 5637, and UM-UC-3 cells using wound healing and transwell migration assays. Knockdown of USP43 significantly inhibited the migration of these cell lines (Fig. 2A, B and Supplementary Fig. S4A–D), whereas overexpression of USP43 promoted migration (Supplementary Fig. S4E–H). Additionally, we explored the potential involvement of the epithelial-mesenchymal transition (EMT) pathway in tumor metastasis by assessing EMT-related markers by immunoblotting. N-cadherin, Vimentin, Snail, and Slug protein levels were downregulated upon USP43 knockdown and upregulated upon USP43 overexpression (Fig. 2C, D).
To further investigate the impact of USP43 on BLCA metastasis in vivo, we established stable USP43 knockdown cell lines using lentiviral transfection methods and confirmed successful knockdown through qRT-PCR and immunoblot analysis (Fig. 2E ). We then established popliteal lymph node metastasis and lung metastasis nude mouse models. Compared with those in mice inoculated with control T24 stable cell lines (LV-shNC cells), popliteal lymph node volume and metastatic lesions in popliteal lymph nodes were lower in nude mice inoculated with T24 stable USP43 knockdown cell lines (LV-shUSP43 cells) (Fig. 2F–I ). Additionally, in vivo imaging of the lung metastasis nude mouse model revealed that the fluorescence intensity in the lungs was downregulated in nude mice inoculated with LV-shUSP43 cells. This was consistent with the presentations of dissected lung tissues and pathological sections (Fig. 2J–M ). Notably, the prognosis of nude mice inoculated with LV-shUSP43 cells was better than that of mice inoculated with LV-shNC cells (Fig. 2N ). Overall, these findings suggest that USP43 promotes BLCA cell metastasis.
USP43 stabilizes c-Myc
We next investigated whether USP43 is a DUB of c-Myc. Our results demonstrated that USP43 knockdown and overexpression had no effect on MYC mRNA levels in T24 cells (Fig. 3A and Supplementary Fig. S5A ). Similar findings were obtained in 5637 cells (Supplementary Fig. S5B ). However, knockdown of USP43 decreased the c-Myc protein level in T24 and 5637 cells (Fig. 2C ), while overexpression of USP43 increased the c-Myc protein level in T24, 5637, and UM-UC-3 cells (Fig. 2D ). Additionally, c-Myc protein levels were increased by USP43 in a dose-dependent manner (Fig. 3B ). To determine whether USP43 affects the protein stability of c-Myc, a cycloheximide (CHX) chase assay was conducted in UM-UC-3 and 5637 cells transfected with USP43 siRNA or USP43 overexpression plasmids. As expected, the half-life of c-Myc was significantly downregulated upon USP43 knockdown and significantly upregulated upon USP43 overexpression (Fig. 3C–F and Supplementary Fig. S5C–F ). Furthermore, the effect of USP43 on c-Myc was blocked by the proteasome inhibitor MG132 (Fig. 3G, H ). Together, these results suggest that USP43 stabilizes c-Myc through the ubiquitin-proteasome pathway.
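Half-lives in a CHX chase are usually estimated from densitometry of the immunoblot bands assuming first-order decay. The authors do not describe their fitting procedure, so the sketch below (a simple least-squares fit of log intensity versus time) is one common way to do it; the example time points and intensities are invented:

```python
import math

def protein_half_life_h(times_h: list[float], intensities: list[float]) -> float:
    """Estimate protein half-life from CHX-chase band intensities.

    Assumes first-order decay I(t) = I0 * exp(-k * t): fit
    ln(intensity) vs. time by least squares, then t1/2 = ln(2) / k.
    """
    ys = [math.log(i) for i in intensities]
    n = len(times_h)
    mean_t = sum(times_h) / n
    mean_y = sum(ys) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(times_h, ys))
             / sum((t - mean_t) ** 2 for t in times_h))
    return math.log(2) / -slope

# e.g., intensity halving every 2 h gives a half-life of 2 h
half_life = protein_half_life_h([0.0, 2.0, 4.0], [1.0, 0.5, 0.25])
```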
USP43 interacts with c-Myc
Next, we performed a coimmunoprecipitation assay to investigate the potential interaction between USP43 and c-Myc. Our results demonstrated a strong interaction between exogenously expressed Flag-USP43 and HA-c-Myc in 293 T cells (Fig. 4A, B ), while recombinant GST-c-Myc pulled down recombinant His-USP43 in vitro (Fig. 4C ). To further explore the interacting domains of USP43 and c-Myc, we generated a series of truncated mutants and performed a coimmunoprecipitation assay. As shown in Fig. 4D–H , USP43 interacted with the N-, N1-, N2-, and C1-terminus of c-Myc, while c-Myc interacted with both the N-terminus and the C-terminus of USP43. Moreover, endogenous USP43 interacted with c-Myc in T24 and 5637 cells (Fig. 4I and Supplementary Fig. S6A ). Immunofluorescence analysis revealed that c-Myc and USP43 were colocalized in the nucleus of UM-UC-3 cells (Fig. 4J ). These findings indicate the interaction between USP43 and c-Myc.
USP43 deubiquitinates c-Myc at K148 and K289
We next examined the impact of USP43 on c-Myc polyubiquitination levels. Our results demonstrated that c-Myc polyubiquitination was upregulated upon USP43 knockdown and downregulated upon USP43 overexpression (Fig. 5A–C ). To determine whether USP43 deubiquitinase activity is necessary for c-Myc regulation, we generated a USP43 deubiquitinase inactivating mutant by mutating the cysteine at position 110 to serine (C110S) (Fig. 5D ). We found that while the C110S mutant had a similar c-Myc binding capacity to wild-type USP43, it was less effective at reducing c-Myc polyubiquitination levels and increasing c-Myc protein content (Fig. 5 C, E , and Supplementary Fig. S6B ). A similar trend was also observed in the activity of c-Myc on LDHA-luciferase (Fig. S 6C ). In vitro deubiquitination experiments confirmed these findings (Fig. 5F, G ). Next, the type of polyubiquitin chain of c-Myc deubiquitinated by USP43 was examined, and our data indicated that USP43 catalyzed the deubiquitination of the K48 chain but not the K63 chain of c-Myc, which is associated with proteasomal pathway degradation [ 28 ] (Fig. 5H ). To identify the lysine residue of c-Myc catalyzed by USP43, we constructed a series of c-Myc point mutant plasmids with lysine mutated to arginine. In the first round of screening, we found four c-Myc mutants (K148R, K289R, K371R, K397R) that were not regulated by USP43 (Supplementary Fig. S6D ). In the second round of screening, we found only two c-Myc mutants (K148R and K289R) that were not regulated by USP43 (Supplementary Fig. S6E ). We then constructed a double mutant of c-Myc and confirmed that K148 and K289 are indeed essential for USP43 catalysis of c-Myc (Fig. 5I and Supplementary Fig. S6F ). Further deubiquitination assays showed that USP43 was able to deubiquitinate wild-type c-Myc but not the c-Myc mutants K148R, K289R, and K148/289R (Fig. 5J ). 
Coimmunoprecipitation assays indicated that several c-Myc mutants could interact with USP43, similar to wild-type c-Myc (Supplementary Fig. S6G ). These results suggest that USP43 deubiquitinates c-Myc at K148 and K289.
USP43 antagonizes c-Myc degradation by FBXW7
Previous reports have demonstrated that USP28 and USP38 stabilize c-Myc through the E3 ubiquitin ligase FBXW7 [29, 30]. Therefore, we examined whether USP43 could stabilize c-Myc by a similar mechanism. Sequential phosphorylation at serine 62 and threonine 58 of c-Myc is required for FBXW7-mediated degradation of c-Myc, and when threonine 58 or serine 62 was mutated to alanine, FBXW7 failed to degrade c-Myc [19, 31]. As reported, FBXW7 was not capable of degrading the c-Myc T58A, S62A, or T58/S62A mutants in comparison with wild-type c-Myc (Fig. 6A and Supplementary Fig. S7A). However, USP43 was able to stabilize not only wild-type c-Myc but also all of these mutants (Fig. 6B and Supplementary Fig. S7B). Knockdown or overexpression of USP43 did not change the protein level of FBXW7 in the T24 and 5637 cell lines (Supplementary Fig. S7C, D). Furthermore, knockdown of USP43 in FBXW7-depleted T24 and 5637 cells still resulted in a reduction in c-Myc protein (Fig. 6C). Based on these results, it appears that USP43 stabilizes c-Myc independently of FBXW7.
We also investigated whether USP43 and FBXW7 could balance the protein levels of c-Myc. Our results showed that exogenous USP43 blocked the ubiquitination-mediated degradation of c-Myc by FBXW7 (Fig. 6D, E ). USP43 deubiquitinates c-Myc at K148 and K289, and the K148/K289R c-Myc mutant showed an upregulated protein level and a downregulated ubiquitination level compared with wild-type c-Myc (Fig. 5I, J and Fig. 6F, G ). Considering these findings, it was hypothesized that FBXW7 may catalyze c-Myc at K148 and K289. Indeed, FBXW7 was almost completely incapable of degrading c-Myc by ubiquitination when lysine 148 and 289 were mutated to arginine (K148R, K289R, K148/289R) but still retained the ability to bind to several c-Myc mutants (Fig. 6F, G and Supplementary Fig. S7E, F ). Interestingly, wild-type USP43 can still stabilize T58A, S62A, and T58/S62A c-Myc mutants, which cannot be degraded by FBXW7 (Fig. 6B and Supplementary Fig. S7B ), suggesting the existence of other E3 ubiquitin ligases that can catalyze c-Myc ubiquitination at K148 and K289.
Next, we examined the effect of SKP2, another E3 ubiquitin ligase of c-Myc, on the K148R, K289R, and K148/289R c-Myc mutants. These mutations only partially impaired SKP2-mediated degradation of c-Myc compared with wild-type c-Myc, indicating that K148 and K289 are not the dominant sites at which SKP2 catalyzes c-Myc ubiquitination (Supplementary Fig. S7F).
In our investigation of the impact of the DUB dead mutant USP43 (C110S) on wild-type c-Myc and T58A, S62A, and T58/S62A c-Myc mutants, we made an intriguing discovery. Specifically, we found that the C110S mutant was able to increase the wild-type c-Myc protein level, but not the T58A, S62A, and T58/S62A c-Myc mutants (Fig. 6B and Supplementary Fig. S7G ). This suggests that when USP43 loses its deubiquitinase activity, it may rely on FBXW7 to regulate c-Myc. Since both USP43 and FBXW7 can bind to c-Myc, we hypothesized that they might compete for binding to c-Myc.
There has been evidence that FBXW7 binds to the Myc box I (MBI) of c-Myc in previous studies [ 31 ], and we wondered whether USP43 also had the ability to bind to the MBI of c-Myc. Our coimmunoprecipitation assay demonstrated that the MBI domain of c-Myc can indeed interact with USP43, and the interaction was weakened when the MBI domain was deficient in the c-Myc mutant (Fig. 6H and Supplementary Fig. S7H ). Thus, the MBI domain appears to be a key interaction domain for c-Myc and USP43. Given that both USP43 and FBXW7 bind to the MBI of c-Myc, it seems reasonable to assume that increased USP43 may impede FBXW7’s access to c-Myc. In support of this notion, we observed that increasing doses of USP43 decreased FBXW7 binding to c-Myc (Fig. 6I ). These results suggest that wild-type USP43 primarily stabilizes c-Myc by directly deubiquitinating c-Myc, while USP43 with a loss of catalytic activity still protects c-Myc to some extent by competitively binding to c-Myc with FBXW7. Functionally, USP43 and FBXW7 counteracted each other to influence the migration ability of 5637 cells (Fig. 6J, K ). Overall, our results indicate that USP43 antagonizes the FBXW7-mediated degradation of c-Myc and cooperates with FBXW7 to regulate the migration ability of BLCA cells.
USP43 is a direct target of c-Myc
c-Myc forms a heterodimer with MAX to activate the transcription of numerous genes by binding to the E-box sequence (CACGTG) of target genes, participating in various physiological and pathological processes [32]. The JASPAR database indicates a potential E-box sequence in the USP43 promoter region (Fig. 7A). Furthermore, ChIP-seq data from the GTRD and hTFtarget databases suggest that USP43 is a potential target gene of c-Myc (Supplementary Fig. S8A). The Cistrome Data Browser database also revealed a binding peak of c-Myc in the promoter region of USP43 (Supplementary Fig. S8B). Therefore, we aimed to investigate whether USP43 is transcriptionally regulated by c-Myc. First, the USP43 promoter region spanning −2000 to −1 (relative to the translation initiation site) was amplified and cloned into the pGL4.10 vector. The dual-luciferase reporter assay revealed that exogenous c-Myc significantly increased the activity of the USP43 promoter. However, when we mutated the E-box of the USP43 promoter to CAGCTG, the effect of exogenous c-Myc on mutant USP43 promoter activity was greatly weakened (Fig. 7B, C). To test whether c-Myc directly binds to the USP43 promoter, we performed chromatin immunoprecipitation (ChIP) and qPCR. The results showed that c-Myc could pull down the DNA fragment of the USP43 promoter region (Fig. 7D, E). Moreover, in 293T cells, exogenous expression of c-Myc increased the mRNA level of USP43 (Fig. 7F). Consistently, knockdown of MYC resulted in downregulation of USP43 mRNA levels in 5637 and UM-UC-3 cells, and the protein level of USP43 in 5637 cells showed the same trend (Fig. 7G and Supplementary Fig. S8C, D). Collectively, these data suggest that c-Myc directly binds to the promoter region of USP43 and activates USP43 transcription, confirming that c-Myc is a transcription factor for USP43.
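The canonical E-box CACGTG is palindromic (its reverse complement is itself), so scanning one strand of a promoter sequence is enough to locate candidate c-Myc binding sites. A minimal sketch is shown below; the actual analysis used the JASPAR and ChIP-seq databases, and the example sequence is invented:

```python
def find_ebox(promoter: str, motif: str = "CACGTG") -> list[int]:
    """0-based start positions of the E-box motif in a promoter string.
    CACGTG equals its own reverse complement, so one strand suffices."""
    seq = promoter.upper()
    return [i for i in range(len(seq) - len(motif) + 1)
            if seq[i:i + len(motif)] == motif]

# the CACGTG -> CAGCTG promoter mutation used above abolishes the match
wild_type_hits = find_ebox("TTCACGTGAA")  # one hit, at position 2
mutant_hits = find_ebox("TTCAGCTGAA")     # no hits
```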
USP43 regulates BLCA cell migration by targeting c-Myc
Because USP43 regulates c-Myc stability, we investigated whether the impact of USP43 on BLCA cell migration depended on c-Myc. Transwell migration and wound healing assays were conducted to assess the effects. We first knocked down USP43 in 5637 cells, followed by reconstitution with wild-type USP43 or a deubiquitinase inactivating mutant. The results showed that the introduction of wild-type USP43 significantly reversed the inhibition of migration caused by USP43 knockdown, while the deubiquitinase inactivating mutant only partially rescued the migration inhibition (Fig. 7H, I and Supplementary Fig. S9A). Next, we explored whether c-Myc could restore the migration inhibition caused by USP43 knockdown. Our findings indicated that overexpression of c-Myc counteracted the reduced migration ability of 5637 and T24 cells resulting from USP43 knockdown (Fig. 7J, K and Supplementary Fig. S9B, C). Therefore, we concluded that USP43 regulates BLCA cell migration primarily through c-Myc.

Discussion
The initiation and progression of cancer are characterized by distinct metabolic reprogramming, which maintains the high proliferation rate of cancer cells [ 33 ]. In BLCA, various metabolic pathways are altered and contribute to tumorigenesis. Notably, a shift toward aerobic glycolytic metabolism (known as the Warburg effect) is a hallmark of tumor cells, including those found in BLCA. In response to high lactate content and subsequent acidification due to a glycolytic metabolic shift, carcinogenesis is facilitated by invasiveness, acid-mediated matrix degradation and metastasis [ 4 ]. Thus, the Warburg effect is associated with BLCA progression and aggressiveness. Oncogenes and tumor suppressors directly mediate the metabolic reprogramming of cancer cells. c-Myc, L-Myc, and N-Myc, which are members of the MYC family of oncoproteins, regulate metabolic reprogramming in human cancers. Although c-Myc expression is tightly regulated in normal cells, deregulation of c-Myc occurs in up to 70% of human cancers due to multiple mechanisms, including the gain in genetic copy number (chromosomal amplification or translocation), activation of superenhancers, aberrant upstream signaling, and altered protein stability [ 13 ]. Similarly, amplification of the MYC oncogene has been reported in BLCA [ 14 , 15 ]. The c-Myc oncoprotein is a “hypertranscription factor” that regulates transcription of at least 15% of the entire genome and controls various tumor phenotypes, including tumor cell proliferation, invasion, cell survival, genomic instability, angiogenesis, metabolism and immune evasion [ 13 ]. c-Myc is a major regulator of aerobic glycolysis, as it binds directly to a classical E-box sequence to transcribe almost all glycolytic genes [ 34 ].
The role of glycolysis in tumor progression and the central role of c-Myc in glycolysis and cancer have led to an increased interest in selective targeted therapy for tumor glucose metabolism disorders and c-Myc deregulation. One such therapy is 2-deoxy-D-glucose (2DG), a glucose analog that competitively inhibits glucose uptake and accumulates intracellularly. It then noncompetitively inhibits hexokinase (HK) and competitively inhibits phosphoglucose-isomerase (PGI). As 2DG targets glucose metabolism in tumor cells, it leads to insufficient energy supply and shows significant antitumor effects [ 35 ]. However, its clinical efficacy is diminished due to the large amount of natural glucose present in the circulation. While targeted therapies for other metabolic enzymes of tumor glycolysis are still in their early stages, few molecular targets can enter clinical trials to achieve the desired efficacy [ 36 ]. Regarding c-Myc, numerous animal experiments have demonstrated that c-Myc inactivation can cause tumor regression [ 37 , 38 ]. Nonetheless, inhibiting c-Myc with small molecules is challenging since it lacks a specific active site similar to kinases. In addition, c-Myc is a transcription factor that localizes and functions in the nucleus, making it challenging to target with antibodies. Furthermore, since c-Myc is essential for normal growth and development, non-tumor-selective inhibition may result in severe toxicity to normal tissues. Thus, indirect approaches to inhibit c-Myc, such as targeting MYC transcription, translation, stability, the MYC/MAX complex, and synthetic lethality with MYC have been examined [ 39 , 40 ].
The stability of the c-Myc protein is tightly regulated by the ubiquitin‒proteasome system [ 41 ]. In our current study, we screened a siRNA library targeting deubiquitinating enzymes (DUBs) to identify DUBs that positively regulate glycolysis and c-Myc transcriptional activity. We found that USP43 is a crucial deubiquitinase that controls both glycolysis and c-Myc transcriptional activity in BLCA. In particular, USP43 removes the polyubiquitination chains of c-Myc, preventing its degradation through the ubiquitin-proteasome pathway. Stable c-Myc can function as a transcription factor to activate USP43 transcription. This partially explains the upregulation of USP43 in BLCA tissues, wherein amplified c-Myc increases USP43 mRNA levels. This study revealed a positive feedback loop between USP43 and c-Myc in BLCA. The dysregulation of this loop results in aberrant glycolysis and the accumulation of c-Myc protein, both of which contribute to the malignant behaviors of BLCA. These results suggest that USP43 is critically involved in regulating glycolysis and c-Myc activity in BLCA and therefore holds promise as a potential therapeutic target for disrupting the USP43/c-Myc circuit and regulating BLCA behavior (Fig. 8 ).
Several deubiquitinases for c-Myc have previously been identified, including USP28 and USP38, which function through FBXW7 to interact with c-Myc, protecting it from degradation [ 29 , 30 ]. Other deubiquitinases, such as USP37, USP36, USP13, USP22, USP16, USP29, and OTUB1, have also been reported to deubiquitinate c-Myc [ 22 , 23 , 42 – 46 ]. As a result of this study, we identified USP43 as a c-Myc deubiquitinase. We found that USP43 directly cleaves the polyubiquitin chains of c-Myc, protecting it from degradation. Additionally, the interaction between USP43 and c-Myc reduces the proximity of FBXW7 to c-Myc, indirectly stabilizing it. Although previous reports have suggested that USP43 functions through its deubiquitinase activity [ 47 – 49 ], we also found that loss of this activity can still stabilize c-Myc. These findings suggest that the two functions of USP43 operate in coordination to maintain c-Myc as an oncoprotein and promote tumor progression (Fig. 8 ). Similarly, EZH2, which is known to play an oncogenic role through its methyltransferase activity, has recently been found to stabilize N-Myc independent of this activity [ 50 ].
After conducting mutant screening, we discovered that USP43 cleaves the polyubiquitin chains of c-Myc at two specific sites, K148 and K289. Additionally, our experiments revealed that the K148R, K289R, and K148/K289R c-Myc mutants significantly decreased the ability of FBXW7 to catalyze c-Myc ubiquitination, further indicating that K148 and K289 are the main sites for c-Myc ubiquitination by FBXW7. However, the K148R, K289R, and K148/K289R c-Myc mutants only partially affected the degradation of c-Myc by SKP2, indicating that K148 and K289 are not the primary sites of c-Myc ubiquitination catalyzed by SKP2. Our results suggest that USP43 and FBXW7 play antagonistic roles in controlling the protein level of c-Myc. Specifically, they balance the ubiquitination level of c-Myc at K148 and K289, thereby determining the fate of c-Myc.
In conclusion, our study revealed the mechanisms through which USP43 regulates glycolysis and c-Myc transcriptional activity. These findings indicate that USP43 is a potential target for the development of targeted therapy in BLCA. However, the efficacy of drugs targeting USP43 will depend on their ability to inhibit the deubiquitinase activity of USP43, degrade the USP43 protein, or disrupt the USP43-c-Myc interaction.
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Abstract

A hallmark of tumor cells, including bladder cancer (BLCA) cells, is metabolic reprogramming toward aerobic glycolysis (the Warburg effect). The classical oncogene MYC, which is crucial in regulating glycolysis, is amplified and activated in BLCA. However, direct targeting of the c-Myc oncoprotein, which regulates glycolytic metabolism, presents great challenges and necessitates the discovery of a more clearly defined regulatory mechanism to develop selective targeted therapy. In this study, a siRNA library targeting deubiquitinases identified a candidate enzyme named USP43, which may regulate glycolytic metabolism and c-Myc transcriptional activity. Further investigation using functional assays and molecular studies revealed a USP43/c-Myc positive feedback loop that contributes to the progression of BLCA. Moreover, USP43 stabilizes c-Myc by deubiquitinating c-Myc at K148 and K289, primarily through its deubiquitinase activity. Additionally, upregulation of USP43 protein in BLCA increased the chance of interaction with c-Myc and interfered with FBXW7 access to, and degradation of, c-Myc. These findings suggest that USP43 is a potential therapeutic target for indirectly targeting glycolytic metabolism and the c-Myc oncoprotein, consequently enhancing the efficacy of bladder cancer treatment.
Supplementary information
The online version contains supplementary material available at 10.1038/s41419-024-06446-7.
Acknowledgements
The excellent technical assistance of Ms. Mengxue Yu, Ms. Danni Shan, Ms. Cong Zou, Mr. Wen Chen and Mr. Zongning Zhou is gratefully acknowledged. We also gratefully thank the exceptional assistance in editing diagram by Dr. Yuruo Chen. This study was supported by the National Natural Science Foundation of China (82273065), the Non-profit Central Research Institute Fund of Chinese Academy of Medical Sciences (2020-PT320-004), the Fundamental Research Funds for the Central Universities (2042022dx0003) and the Research Fund of Zhongnan Hospital of Wuhan University (SWYBK00-03). The funders played no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Author contributions
M.L., L.J., Y.Z., Y.X., and X.W. designed the study and wrote the manuscript. M.L., J.Y., and L.J. performed the most experiments. M.L., J.Y., Y.W., W.J., R.Z., W.X., M.J., W.D., G.W., K.Q. and Y.Z. helped with data collection and assembly. M.L., J.Y., L.J., Y.Z., and Y.X. performed data analysis and interpretation. M.L., L.J., Y.X., and X.W. wrote the original draft, review, and editing with the help of all authors. All authors reviewed the manuscript.
Data availability
The publicly available data for differential gene expression analysis and survival analysis were obtained from the online website GEPIA ( http://gepia.cancer-pku.cn/index.html ). The publicly available TCGA-BLCA cohort data (the data included 408 tumors, and 19 normal samples) were obtained from the GDC Data Portal website ( https://portal.gdc.cancer.gov/ ). The publicly available target genes of c-Myc were obtained from MYC ChIP-seq data in GTRD ( http://gtrd.biouml.org/#! ) and hTFtarget databases ( http://bioinfo.life.hust.edu.cn/hTFtarget#!/ ). The publicly available data showing c-Myc binding peaks in the USP43 promoter region were obtained from the Cistrome Data Browser ( http://cistrome.org/db/#/ ). The remaining data are available within the article, Supplementary Information or Original Data file.
Competing interests
The authors declare no competing interests.

Citation: Cell Death Dis. 2024 Jan 13; 15(1):44. License: CC BY.
PMC10787742 (PMID: 38218902)

Introduction
Advanced glycation end products (AGEs) are the final products of proteins or lipids that become glycated and oxidized 1 , 2 . They are primarily formed in environments of hyperglycemia and oxidative stress but also accumulate with normal aging. AGE levels are particularly increased in diabetes and other age-related inflammatory or metabolic diseases, and in chronic kidney disease due to decreased excretion 1 . Accumulation of AGEs in tissues, including the brain, may lead to modification of proteins and of the extracellular matrix and to activation of inflammatory pathways by binding to the receptor for AGEs 3 . It is suggested that AGEs contribute to cognitive impairment 4 , dementia 5 , 6 , cerebral atrophy 7 – 9 , and to Alzheimer-related pathology 10 – 12 . For instance, previous studies showed higher concentrations of AGEs in the brain, cerebrospinal fluid, and serum of patients with Alzheimer’s disease (AD) 13 – 17 . Thus, AGE accumulation may have a role in the mechanisms linking diabetes to dementia 6 , 18 . Moreover, interactions of AGEs with APOE ε4, the most important genetic risk factor for dementia at older age, have been suggested 19 , 20 .
Tissue accumulation of AGEs can be estimated non-invasively as skin autofluorescence with an AGE Reader 21 , which may reflect AGE accumulation in tissues with low turnover, such as the brain. Skin autofluorescence measurement is based on fluorescent properties of AGEs and has been demonstrated to correlate with levels of both fluorescent and non-fluorescent AGEs in biopsy-derived skin tissue 21 . Levels of skin autofluorescence are increased in persons with diabetes and predict cardiovascular disease and mortality 22 . In addition, previous studies have shown that AGE accumulation in the skin is associated with worse cognition and with dementia cross-sectionally 19 , 23 . So far, no longitudinal studies have been conducted to study the association between AGE accumulation and the risk of dementia. Few studies investigated the association of AGE accumulation with brain volumetrics, as a preclinical marker of dementia, or with markers of cerebral small vessel disease 7 – 9 .
In this study, we determined the association of skin AGEs with the risk of dementia and whether they are related to measures of brain atrophy and of cerebral small vessel disease. In addition, we aimed to assess whether certain subgroups of participants, such as APOE ε4 carriers or persons with type 2 diabetes, might drive such associations.

Methods and materials
Study design
This study is embedded within the Rotterdam Study, a prospective population-based cohort designed to study the occurrence and determinants of diseases in the older population, as described previously 24 . Briefly, in 1990 all inhabitants aged 55 years or over from the district Ommoord in Rotterdam, the Netherlands, were invited to participate. The initial cohort comprised 7983 participants (subcohort RS-I) and was extended in 2000 with 3011 participants (subcohort RS-II) who had become 55 years of age or moved into the study district. In 2006, the cohort was further extended (subcohort RS-III) with 3932 participants aged 45 years or over. In total, the Rotterdam Study comprises 14,926 participants. Brain MRI scanning was performed in the Rotterdam Study population from 2005 onwards. The Rotterdam Study has been approved by the medical ethics committee according to the Population Study Act Rotterdam Study, executed by the Ministry of Health, Welfare and Sports of the Netherlands. All participants gave written informed consent. All methods were performed in accordance with the relevant guidelines and regulations.
Study population
Skin autofluorescence was measured between 2013 and 2016 in 3009 participants from RS-I-6, RS-II-4, and RS-III-2. Participants with outlying skin autofluorescence levels (defined as beyond the mean ± 4 standard deviations (SDs); N = 8) were excluded. Of the remaining participants, 2922 were free of dementia (35 had dementia at the time of skin autofluorescence assessment, 44 had unknown dementia status) and were eligible for the current study. A subset of these participants also had a brain MRI scan acquired between 2013 and 2016 (N = 1504). For analyses with brain volumes, participants with cortical infarcts (N = 42) were excluded. Information on lacunes and microbleeds was available for 1476 participants. Hippocampus volume (sum of left and right) was available for 1104 participants.
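The mean ± 4 SD exclusion rule above can be sketched as follows (an illustrative re-implementation, not the study's actual code; the function name and the threshold parameter are our own):

```python
from statistics import mean, stdev

def exclude_outliers(values, n_sd=4.0):
    """Keep only values within mean +/- n_sd sample standard deviations."""
    m, s = mean(values), stdev(values)
    lo, hi = m - n_sd * s, m + n_sd * s
    return [v for v in values if lo <= v <= hi]
```

Note that with a single pass, an extreme value inflates the very SD used to judge it, so in small samples a genuine outlier can escape exclusion; with roughly 3000 observations, as here, this is rarely a problem.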
Measurement of skin AGEs
During the visit to the research center, skin autofluorescence was measured using the AGE Reader device (DiagnOptics B.V., Groningen, The Netherlands), based on the fluorescent property of AGEs. Briefly, approximately 4 cm 2 of skin at the volar side of the dominant forearm was illuminated with an excitation light source from the AGE Reader with a peak wavelength of 370 nm. The device estimates skin AGEs based on the emission and reflection spectrum, which the software converts into numerical values reported in arbitrary units. Thus, an elevated skin autofluorescence score corresponds to a high tissue AGE level. This method has been validated against AGEs measured in skin biopsies from the same site where skin autofluorescence was measured 21 . Participants were asked not to use skin creams before the measurement. The mean of three consecutive measurements was used for analyses.
Assessment of dementia
Participants were screened for dementia at baseline and subsequent center visits with the Mini-Mental State Examination and the Geriatric Mental Schedule organic level. Those with a Mini-Mental State Examination score < 26 or Geriatric Mental Schedule score > 0 underwent further investigation and informant interview, including the Cambridge Examination for Mental Disorders of the Elderly. In addition, the entire cohort was continuously under surveillance for dementia through electronic linkage of the study database with medical records from general practitioners and the regional institute for outpatient mental health care. Available information on clinical neuroimaging was used when required for diagnosis of dementia subtype. The final diagnosis was established by a consensus panel led by a consultant neurologist, according to standard criteria for dementia (using the Diagnostic and Statistical Manual of Mental Disorders, third edition, revised) and for Alzheimer's disease (AD) (using the criteria of the National Institute of Neurological and Communicative Disorders and Stroke–Alzheimer's Disease and Related Disorders Association).
Brain imaging
Brain MRI scanning was performed on a 1.5-Tesla MRI scanner (General Electric Healthcare, Milwaukee, USA) with an 8-channel head coil. Image acquisition included a high-resolution axial T1-weighted sequence, a proton density-weighted sequence, a fluid-attenuated inversion recovery sequence, and a T2*-weighted gradient echo sequence. The scan protocol, sequence details, and processing of MRI data in the Rotterdam Study have been described previously 25 . Total intracranial and parenchymal volumes and the volume of white matter hyperintensities were quantified by automated brain tissue segmentation based on a k-nearest-neighbor algorithm. All segmentations were visually inspected and manually corrected when necessary. Total brain volume was defined as the sum of grey and white matter volume. Hippocampus volume was obtained by processing T1-weighted images with FreeSurfer (version 5.1) 26 . Visual evaluation of all scans was performed by trained raters to assess the presence of cortical infarcts, lacunes, and cerebral microbleeds.
For volumetric markers, we used total brain volume, grey matter volume, white matter volume, and hippocampal volume. Cerebral small vessel disease markers comprised white matter hyperintensity volume, presence of lacunes (yes/no), and presence of microbleeds (yes/no).
Assessment of covariates
During home interviews, participants provided information on educational level, smoking status, alcohol use and medication use (antidiabetic, antihypertensive, and lipid-lowering medication) 24 . Educational level was categorized as primary, lower, intermediate, or higher education. Smoking status was classified as never, current, or former. Alcohol use was categorized as no use or any use. At the research center, height and weight were measured and the body mass index (kg/m 2 ) was computed. Blood pressure was measured in the sitting position on the right arm using a random-zero sphygmomanometer. Serum concentrations of glucose, total cholesterol, high-density lipoprotein cholesterol, triglycerides, and creatinine were measured in fasting blood samples during the previous center visit (2009–2013). Serum 25-hydroxyvitamin D levels were derived from earlier rounds (1997–2008) as they have not been measured since. The estimated glomerular filtration rate (eGFR, mL/min/1.73 m 2 ) was calculated using the Chronic Kidney Disease Epidemiology Collaboration equation 27 . Chronic kidney disease (CKD) was defined as an eGFR less than 60 mL/min/1.73 m 2 . APOE was genotyped by polymerase chain reaction in RS-I and by biallelic TaqMan assay in RS-II and RS-III 28 , 29 . Participants were categorized as carriers of no, one, or two ε4 alleles according to the APOE genotype. Type 2 diabetes was defined as fasting blood glucose > 7.0 mmol/L, use of antidiabetic medication, self-report during the interview, or a diagnosis of type 2 diabetes in general practitioners' records.
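The CKD-EPI creatinine equation referenced above can be sketched as follows. This is the 2009 version with the published coefficients; the function itself is our illustration (the race coefficient is omitted for simplicity, and the study's exact implementation may differ):

```python
def egfr_ckd_epi_2009(creatinine_mg_dl, age_years, female):
    """Estimated GFR (mL/min/1.73 m^2) from the 2009 CKD-EPI creatinine equation
    (race coefficient omitted here for simplicity)."""
    kappa, alpha = (0.7, -0.329) if female else (0.9, -0.411)
    ratio = creatinine_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    return egfr

def has_ckd(egfr):
    """Chronic kidney disease defined as eGFR < 60 mL/min/1.73 m^2."""
    return egfr < 60
```

For example, a 60-year-old woman with a serum creatinine of 0.7 mg/dL falls in the normal range, while an 80-year-old man with a creatinine of 2.0 mg/dL would be flagged as having CKD under the study's < 60 cutoff.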
Statistical analyses
Baseline characteristics were described for the total study population, stratified into tertiles of skin autofluorescence, and for the participants with brain MRI available. Additionally, baseline characteristics are reported for age- and sex-balanced tertiles of skin autofluorescence, derived by regressing skin autofluorescence on age and sex and categorizing the residuals into tertiles. These balanced tertiles were used solely for the comparison of baseline characteristics and not for the further analyses. For descriptive purposes, we also created a scatterplot showing all individual measurements of skin autofluorescence by the age of participants. The associations of skin autofluorescence with the risk of dementia and AD were assessed using Cox proportional hazards models. In these models, skin autofluorescence was analyzed in two ways: per SD difference, and categorized into tertiles, with the lowest tertile as the reference. Follow-up started when skin autofluorescence was measured and ended at the date of dementia diagnosis, date of death, or end of the study period (January 1, 2020), whichever came first. We repeated the analyses after stratifying by APOE ε4 carrier status (carriers versus non-carriers), after stratifying by type 2 diabetes status, and after excluding participants with chronic kidney disease, to investigate whether these subgroups drove the associations.
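The two exposure codings described above (per-SD differences and tertiles) can be sketched as follows (illustrative helper functions with our own names, not the study's code):

```python
from statistics import mean, stdev

def per_sd(values):
    """Standardize so that a one-unit difference equals one SD of the exposure."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def assign_tertiles(values):
    """Return a tertile index (0 = lowest, 2 = highest) for each observation."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    groups = [0] * len(values)
    for rank, i in enumerate(order):
        groups[i] = min(rank * 3 // len(values), 2)
    return groups
```

In the Cox models, the tertile index would then enter as two dummy variables, with the lowest tertile (index 0) as the reference category.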
The associations between skin autofluorescence and brain imaging markers were determined using linear regression for continuous outcomes and logistic regression for dichotomous outcomes.
All analyses were adjusted for age, sex, and subcohort (model 1). In model 2, we additionally adjusted for other potential confounders, namely fasting glucose levels, use of antidiabetic medication, educational level, APOE ε4 carrier status, smoking behavior, eGFR, and 25-hydroxyvitamin D. In model 3, we also adjusted for potential confounders which, given that these variables were measured cross-sectionally with skin autofluorescence, may actually be intermediates, namely systolic and diastolic blood pressure, total cholesterol level, high-density lipoprotein cholesterol level, triglyceride levels, and the use of blood pressure-lowering and/or lipid-lowering medication. To evaluate whether age was sufficiently adjusted for, we tested whether additionally adjusting for age squared changed the results. Analyses with volumetric brain imaging measures were additionally adjusted for intracranial volume.
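When covariates are multiply imputed, the model is fitted once per imputed dataset and the m resulting estimates are pooled with Rubin's rules. A minimal sketch (illustrative Python; the study itself used the R "mice" and "survival" packages):

```python
from statistics import mean, variance

def pool_rubin(estimates, within_variances):
    """Pool m per-imputation estimates and their squared standard errors.

    Returns the pooled estimate and its standard error, where
    total variance = mean within-variance + (1 + 1/m) * between-variance.
    """
    m = len(estimates)
    q_bar = mean(estimates)
    w_bar = mean(within_variances)
    b = variance(estimates)          # sample variance across imputations
    total_var = w_bar + (1 + 1 / m) * b
    return q_bar, total_var ** 0.5
```

For example, five per-imputation log-hazard ratios of 1.0, 1.2, 0.8, 1.1, and 0.9 with within-imputation variance 0.04 each would pool to an estimate of 1.0 with a total variance of 0.07.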
Missing data on covariates were imputed using fivefold multiple imputation (i.e., the "multivariate imputation by chained equations" package in R statistical software version 3.6.3 [R Project for Statistical Computing]). Survival analyses were conducted using the "survival" package in R. Statistical testing was performed two-sided, with P < 0.05 considered significant.

Results
Among the 2922 included participants, the mean age was 72.6 years (SD 9.4) and 57% were women (Table 1 ). Skin autofluorescence levels were normally distributed, with values ranging from 1.1 to 4.4 arbitrary units and a mean of 2.4 (SD 0.5). Levels increased with the age of the participants (Supplementary Fig. 1 ), although this link was only seen for participants who did not develop dementia during follow-up. Participants in the highest tertile of skin autofluorescence were indeed older than participants in the lowest tertile (mean age 75.6 versus 69.9 years, P < 0.001, Table 1 ) and were less often women (46% versus 67%). In addition, they were more often current smokers, had higher fasting glucose levels, a higher prevalence of type 2 diabetes, and lower kidney function, even after regressing out the effects of age and sex (Supplementary Table 1 ). Participants with MRI (N = 1504) were on average slightly younger (mean age 71.2 [SD 9.2]) than the overall study population (Table 1 ).
In total, 123 participants developed dementia during a median of 4.3 years of follow-up (interquartile range 3.3–5.3), of whom 98 had AD. Higher levels of skin autofluorescence per SD were associated with an increased risk of dementia (hazard ratio (HR) 1.21 [95% confidence interval (CI) 1.01–1.46]) and of AD (HR 1.19 [0.97–1.47]), adjusted for potential confounders (model 2). Participants in the highest skin autofluorescence tertile had a 1.4-fold higher risk of dementia and a 1.3-fold higher risk of AD compared to participants in the lowest tertile (model 2 adjusted HRs 1.42 [95% CI 0.88–2.29] and 1.29 [0.76–2.19], respectively). Additional adjustment for other cardiovascular risk factors (model 3) did not change these results. Table 2 shows the associations per SD and by tertile of skin autofluorescence with the risk of dementia and of AD using the different models for adjustment.
The increased risks were somewhat more pronounced in APOE ε4 carriers (HR per SD higher 1.34 [0.98–1.82] for all-cause dementia; 1.44 [1.01–2.05] for AD, Fig. 1 and Supplementary Table 2 ) and in persons with type 2 diabetes (HR 1.35 [0.94–1.94] and 1.27 [0.83–1.95]). Note, though, that formal interaction terms were not significant for these stratifications. Exclusion of participants with CKD did not substantially change the effect sizes, nor did further adjustment for age squared.
Participants with higher skin autofluorescence also had smaller total brain volumes (adjusted difference in z-score per SD − 0.02 [− 0.04; 0.00]), non-significantly smaller grey matter volumes (− 0.03 [− 0.06; 0.00]) and smaller hippocampus volumes (− 0.05 [− 0.10; − 0.01]), but not white matter volumes (− 0.01 [− 0.05; 0.02], Fig. 2 , details in Supplementary Tables 3 and 4 ).
In addition, they tended to have higher white matter hyperintensity volumes (0.03 [− 0.02; 0.07]) and to more often have microbleeds and lacunes (odds ratios: 1.11 [0.97–1.27] and 1.25 [1.01–1.55]).
Associations of skin autofluorescence with brain MRI measures were mainly present in participants with type 2 diabetes and in APOE ε4 non-carriers, except for hippocampus volume, which was associated with skin autofluorescence in both APOE ε4 carriers and non-carriers, although not statistically significantly. Again, the results did not change after excluding participants with CKD or with additional adjustment for age squared (Supplementary Tables 3 and 4 ).

Discussion
We found that higher skin autofluorescence, reflecting long-term accumulation of AGEs, is associated with a higher risk of all-cause dementia and of AD, independently of age and several other potential confounders. These associations were more pronounced in APOE ε4 carriers and in participants with type 2 diabetes. Skin AGE levels were also associated with a smaller total brain volume, grey matter volume and hippocampus volume, and with presence of lacunes, and non-significantly with white matter hyperintensity volume and with presence of microbleeds.
Our results are in line with existing cross-sectional literature that found higher levels of AGEs in the skin and in the brain, plasma, serum or urine of persons with dementia or cognitive impairment 4 , 8 , 16 , 19 , 23 . The results of our study add to the current literature by linking AGEs to dementia in a longitudinal setting, thereby supporting the hypothesis that AGEs could contribute to the etiology of dementia.
Involvement of AGEs in dementia pathology was first described in 1994 and was based on co-localization of AGEs with senile plaques and neurofibrillary tangles in the brains of patients with AD 30 – 32 . More recent literature suggests that AGEs in the brain induce inflammation and oxidative stress, resulting in synaptic dysfunction and neuronal damage, and contribute to deposition and accumulation of dementia related pathologies both intracellularly (e.g. tau) and extracellularly (e.g. amyloid β) 5 , 6 , 10 , 11 , 14 , 33 , 34 . In that way, AGEs might underlie the increased risk of dementia among persons with diabetes 18 . These effects may result from direct toxic effects of AGEs and from interaction of AGEs, or other ligands, including amyloid β, with the receptor for AGEs (RAGE), subsequently triggering inflammatory pathways and, in turn, upregulation of RAGE 3 . Interestingly, RAGE also has a role in the transport of amyloid β into the brain across the blood–brain barrier 11 , 35 , 36 . Inhibition of RAGE has been proposed to decrease pathogenic events in AD. RAGE antagonists reduced amyloid β levels and improved learning and memory deficits in mouse models 37 – 40 , but, so far, results from a trial in patients with mild to moderate AD have been inconclusive 41 , 42 .
The finding that AGE levels associate with measures of brain atrophy, and particularly with lower grey matter volumes, is in agreement with previous smaller studies as well 8 , 9 , although associations with decreased hippocampus volumes and increased cerebral small vessel disease were not previously reported. Such brain changes might thus partially mediate an effect of AGEs on dementia, especially in persons with diabetes. However, further investigation is needed to assess the causality of these findings.
APOE ε4 genotype is the strongest genetic risk factor for AD, with the mechanisms likely related to its role in lipid metabolism 43 . An interaction between AGEs and apoE has been proposed, given colocalization in the brain and binding activity of apoE to AGE-modified proteins 20 . In this study, APOE ε4 genotype modified the associations of skin AGEs and dementia such that the associations were more pronounced among carriers. Contrastingly, AGE levels among carriers were not associated with most brain atrophy measures or with measures of cerebral small vessel disease. Other biological pathways linking AGEs and dementia in APOE ε4 carriers are, therefore, more plausible. For example, apoE4 has higher affinity for AGEs than apoE3 and this apoE-AGE interaction could contribute to plaque formation in AD 20 . Similarly, a synergistic effect of APOE ε4 and diabetes on the risk of AD was previously found, which, according to the authors, may be mediated by AGEs 44 .
Diabetes is an important risk factor for both AGE accumulation and dementia 45 , 46 . Our results suggest that AGEs relate to (preclinical) dementia, particularly among persons with type 2 diabetes and to a lesser extent among persons without type 2 diabetes. The latter may be partially explained by their lower absolute AGE levels 16 , or the presence of other compensative mechanisms, such as vessel health, that makes persons without diabetes less susceptible to the effects of AGE accumulation.
Strengths of our study include the large population in whom AGE levels, relevant other variables such as APOE ε4 and diabetes, and subsequent dementia incidence were assessed. AGE accumulation was measured in the skin, which is thought to reflect accumulation in other long-lived tissues 47 , such as the brain 48 . Some limitations also need to be discussed. First, due to the median follow-up of 4.3 years and in view of the long preclinical phase of dementia, inferences about the direction of the effects should be made with caution. Second, we only measured AGEs with fluorescent properties. Yet, these measurements were shown to correlate with levels of non-fluorescent AGEs as well and are thus considered a marker of the total skin AGE pool 21 . Third, our results were restricted to an elderly population of European ancestry and generalizability may thus be limited.
In conclusion, our findings suggest a role of AGE accumulation in the pathophysiology of dementia, which might contribute to the link between diabetes and dementia. Further research is warranted to determine whether reducing AGE accumulation, and relatedly, RAGE expression and activation, could be protective against dementia. Finally, future studies may explore whether APOE ε4 carriers are more susceptible to AGE related pathology and how APOE ε4 and AGEs might have a joint effect.

Abstract

Conditions such as hyperglycemia and oxidative stress lead to the formation of advanced glycation end products (AGEs), which are harmful compounds that have been implicated in dementia. Within the Rotterdam Study, we measured skin AGEs as skin autofluorescence, reflecting long-term accumulation of AGEs, and determined their association with the risk of dementia and with brain magnetic resonance imaging (MRI) measures. Skin autofluorescence was measured between 2013 and 2016 in 2922 participants without dementia. Of these, 1504 also underwent brain MRI, on which measures of brain atrophy and cerebral small vessel disease were assessed. All participants were followed for the incidence of dementia until 2020. Of 2922 participants (mean age 72.6 years, 57% women), 123 developed dementia. Higher skin autofluorescence (per standard deviation) was associated with an increased risk of dementia (hazard ratio 1.21 [95% confidence interval 1.01–1.46]) and Alzheimer's disease (1.19 [0.97–1.47]), independently of age and other studied potential confounders. Stronger effects were seen in apolipoprotein E ( APOE) ε4 carriers (1.34 [0.98–1.82]) and in participants with diabetes (1.35 [0.94–1.94]). Participants with higher skin autofluorescence levels also had smaller total brain volumes and smaller hippocampus volumes on MRI, and they had more often lacunes. These results suggest that AGEs may be involved in dementia pathophysiology.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-024-51703-6.
Author contributions
S.S.M. designed the study, analyzed and interpreted the data and wrote the manuscript. T.L. contributed to the analysis and interpretation of the data and revised the manuscript. K.W., J.C., M.W.V., M.K.I., M.C.Z. and M.A.I. contributed to the interpretation of the data and revised the manuscript.
Funding
The Rotterdam Study is supported by Erasmus Medical Centre and Erasmus University, Rotterdam, Netherlands Organization for the Health Research and Development (ZonMw), the Research Institute for Diseases in the Elderly (RIDE), the Ministry of Education, Culture and Science, the Ministry for Health, Welfare and Sports, the European Commission (DG XII), and the Municipality of Rotterdam. This study was partly performed as part of the Netherlands Consortium of Dementia Cohorts (NCDC), which receives funding in the context of Deltaplan Dementie from ZonMW Memorabel (projectnr 73305095005) and Alzheimer Nederland. Further funding was obtained through the Stichting Erasmus Trustfonds, grant number 97030.2021.101.430/057/RB. Ms. Lu is supported by grant No. 201906170053 from the China Scholarship Council for PhD fellowship. The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Data availability
Data can be obtained upon request. Requests should be directed towards the management team of the Rotterdam Study ([email protected]), which has a protocol for approving data requests. Because of restrictions based on privacy regulations and informed consent of the participants, data cannot be made freely available in a public repository.
Competing interests
The authors declare no competing interests.

Sci Rep. 2024 Jan 13; 14:1256. License: CC BY.
PMC10787743 (PMID: 38218991)

Introduction
The vast majority of computer game users have not experienced and will not experience gaming disorder 1 . The prevalence of gaming disorder (GD) varies 2 , but in general populations it can be assumed to be approximately 3% 3 . The association between mere gaming time and GD can be called the dosage effect 4 , 5 . Although gaming time may be considered one of the most important variables with regard to gaming disorder, correlation coefficients vary between 0.17 and 0.4 1 , 6 – 9 . Empirical studies confirm the occurrence of the dosage effect, but the relationship, while consistent, is only weak to moderate 1 and can depend on additional factors 10 . Therefore, researchers often look for additional factors that may explain some of the variance. They relatively often focus on mediators, but only in a few cases do they examine moderation effects 11 . Moderators should be understood as variables that explain under what conditions two variables (e.g., gaming time and gaming disorder) are related 12 . Therefore, their identification leads to the identification of the real risk factors for GD. This article aims to test hypotheses on the moderating role of two variables often treated as related to GD, namely depression and anxiety.
Generalized depression and generalized anxiety are related to GD. Some researchers treat them as comorbidities 13 ; others argue for a more direct relationship with GD, most often as predictors 14 or consequences 15 . Results suggesting the opposite direction of the relationship between GD and mental health also exist; for example, anxiety and depression, among others, were shown to be outcomes of gaming disorder 10 . This may suggest that mental health should be treated as a predictor, consequence, or mediator in GD models. This trend is also reflected in review articles listing the most common comorbidities of GD (including depression and anxiety). Their results are not clear; for example, González-Bueso 16 reports that of 15 studies measuring depression, only in eight cases can the strength of the association be described as "large" (the criterion is a minimum of 14% of explained variance or Cohen's d > 0.8), and in two cases no relationship was found. Even less spectacular results were obtained from the analysis of the relationship between anxiety and GD: only two out of ten studies showed "large" effect sizes and one showed a null effect. Taking into account the risk of publication bias identified by the authors, it is even more difficult to consider this direction of research conclusive. These results are corroborated by more recent studies, including a meta-analysis indicating that the correlations between GD and these two health disorders had an average small to medium effect size 17 and a systematic review showing that depression symptoms are present in 32% of people at risk of GD 18 . It seems that the relationship between gaming disorder and anxiety (depression) is unclear, which may be due to various factors 19 . However, one of the reasons may also be the wrong direction of search. Therefore, the question should be asked whether the role of these two variables may be different.
It is impossible to develop GD without using video games just because a person has a mood disorder. Perhaps an approach that allows for both a moderating and a mediating role of depression and anxiety in the development of GD may be useful. A similar dual moderation-mediation role is presented in the interactional-transformational model 20 regarding the role of outcome expectancies in the development of substance abuse.
At this point, the question arises: Can the relationship between gaming time, GD, and depression or anxiety be of an interactional nature? Can anxiety or depression moderate (facilitate) the development of GD? The Person-Affect-Cognition-Execution Interaction (I-PACE) model 21 postulates that the interaction between dysfunctional personality traits, psychopathological characteristics (e.g., depression, social anxiety), other general factors (e.g., vulnerability to stress) and behavior-specific factors (e.g., "a strong predilection towards gaming"; p. 254) influences addictive behaviors by influencing the gratification resulting from specific activities 22 . Assuming, for simplicity, that all remaining factors are constant, according to the I-PACE model, psychopathology-related variables (such as anxiety and depression) can increase the level of gratification from gaming 21 , which in turn results in an increased risk of GD 23 . It follows that conditions related to anxiety and depression can promote deficits in behavioral control and thus contribute to increases in gaming. This may be due, for example, to focusing attention on short-term rewards and risky decision-making 24 . In other words, depression or anxiety can increase the level of gratification, a mediator directly responsible for the development of GD. Gratification may be related to treating gaming as a coping method, which may result from the interaction of general predisposing factors and behavior-specific predisposing factors (i.e., motives, needs). Therefore, players with anxiety or depression may be more likely to develop GD while maintaining gaming intensity similar to that of their healthy counterparts.
It should be noted that the I-PACE model differentiates between the early and late stages of the development of addictive behavior 20 ; depending on the stage, the postulated role of anxiety and depression differs: at an early stage, the gratification mechanism described above occurs, whereas later, mood disorders may result from compensatory mechanisms. This could explain the inconsistencies in the results obtained regarding the direction of the relationship between mood disorders and gaming disorder, which has been previously empirically demonstrated 25 . The first studies using the postulates of both models on GD showed that several factors moderate the relationship between gaming time and the probability of GD: perceived urge and loneliness 4 , and depression, ADHD, self-esteem, and gaming motivation 11 .
Based on this, we conducted a study aimed at verifying the moderating role of the two conditions most strongly associated with GD, namely depression and anxiety. We predicted that higher levels of both depression (hypothesis 1) and anxiety (hypothesis 2) would result in stronger associations between gaming time and GD among participants representing the general population of gamers, that is, participants who potentially may be at an early stage of development of a gaming addiction according to the I-PACE model and whose interests predispose them to gaming. | Methods
Participants
A survey was conducted among Polish gamers over 18 years of age who play video games on any device (computer, smartphone, console, tablet, or other). In total, 595 people participated. Data were excluded in two cases: when participants provided highly implausible information on their gaming habits (that is, more than 100 h of gaming per week or an average gaming session of more than 12 h) and when the study was completed in less than 4 min. Fourteen responses were automatically excluded based on these criteria, and one response was eliminated due to homogeneous responding. Of the remaining 580 responses, 461 were complete and could therefore be used for analysis. Participants who completed the study were aged 18–48 (M age = 23.5, SD age = 5.1) and included 200 (43%) women and 247 (54%) men; the remaining respondents identified as non-binary (8 people, 2%) or declined to answer the question (6 people, 1%).
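The exclusion rules above can be expressed as a simple screening function (an illustrative sketch with assumed argument names, not the authors' code):

```python
def is_valid_response(hours_per_week, session_hours, completion_minutes,
                      homogeneous=False):
    """Apply the study's plausibility and quality screens to one response."""
    if hours_per_week > 100 or session_hours > 12:   # implausible gaming habits
        return False
    if completion_minutes < 4:                        # completed too quickly
        return False
    if homogeneous:                                   # straight-line responding
        return False
    return True
```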
Procedure
Participation was entirely voluntary and no financial compensation was offered to participants. Data were collected from April to June 2022 using Qualtrics ( www.qualtrics.com ). Participants became eligible after giving informed consent and answering screening questions confirming that they were 18 years or older and played video games. Respondents who answered 'no' to either of these questions were not allowed to take the survey. Recruitment was carried out using the snowball method and through Facebook groups for players of various game genres, as well as a few groups not themed around games. This approach allowed for diverse results and the recruitment of players of many different game genres. Recruiting from groups unrelated to games was motivated by the desire to collect data not only from players involved in the gaming community, but also from those who do not spend time on game-related activities outside of playing itself.
Measures
Background variables
Participants were asked about demographics such as gender, place of residence, age, level of education, and relationship status.
Gaming behavior
Respondents answered questions about their gaming preferences. We asked about the dominant type of game in the last 12 months (online or offline), the genres of games they played in the last 12 months, and the preferred devices, such as a computer or laptop, smartphone, console, and VR. There was also a self-diagnostic question about problem gaming ("I think I play games in a way that significantly impairs my functioning", with the answer options 'yes' and 'no'). This question was intended to examine the relationship between the perception of one's own gaming as problematic and the occurrence of an actual problem.
Gaming time
Respondents were asked about the average time they spend playing per week; the answer was given in hours. The average time spent gaming per week was 15.25 h (SD = 12.2). The participants also reported, in minutes, how long their average gaming session lasts. The results showed that the average gaming session lasted 143 min (SD = 92.32).
Gaming disorder risk
To measure the risk of gaming disorder among respondents, we used the Gaming Disorder Test 26 . The four-item Gaming Disorder Test was developed based on the diagnostic criteria for gaming disorder in the International Classification of Diseases (ICD-11). The translation was done by one of the authors using the back-translation method, with the back-translation performed by a translator unrelated to the work on the tool; the back-translated English version was approved by the author of the original. Cronbach's alpha was α = 0.8. In our analysis, we used the sum of the answers to all items (ranging between 4 and 20) as a continuous measure of the risk of GD. According to the authors of the tool, the presence of GD can also be determined in a binary fashion, for respondents who answered each of the four questions with 'Often' (4) or 'Very often' (5).
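The two scoring schemes described (the continuous 4-20 sum and the binary criterion) can be sketched as follows; the function is a hypothetical illustration of the scoring rules, not the published scoring code.

```python
def score_gdt(responses):
    """Score the four-item GDT: responses are integers 1 ('Never') to 5 ('Very often').

    Returns (total, meets_binary_criterion): the 4-20 sum used as the
    continuous risk measure, and the binary GD indicator that requires
    'Often' (4) or 'Very often' (5) on every item.
    """
    if len(responses) != 4 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("GDT expects four responses coded 1-5")
    total = sum(responses)
    return total, all(r >= 4 for r in responses)

print(score_gdt([4, 5, 4, 4]))  # (17, True)
print(score_gdt([5, 5, 5, 3]))  # (18, False)
```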
Depression
We used the 9-item Patient Health Questionnaire (PHQ-9) 27 in its Polish version 28 to measure the level of depression. The questionnaire is intended to detect depression in an initial psychological diagnosis and is open-access. For ethical reasons, the ninth and final item was excluded owing to concern that the question about self-harm and suicide could act as an emotional trigger. For this questionnaire, Cronbach's alpha was α = 0.88.
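Cronbach's alpha, reported here for the shortened PHQ-9, can be computed from item-level data as k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal stdlib sketch with invented toy data:

```python
def cronbach_alpha(items):
    """items: list of columns, one list of scores per questionnaire item."""
    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)                                    # number of items
    n = len(items[0])                                 # number of respondents
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(sample_var(it) for it in items) / sample_var(totals))

# Perfectly parallel items yield alpha = 1.0 (toy data, not study data).
toy = [[0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]]
print(round(cronbach_alpha(toy), 3))  # 1.0
```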
Anxiety
Anxiety was assessed using the 7-item Generalized Anxiety Disorder Questionnaire 29 . The Polish version used in the study was created by MAPI Research Institute ( www.phqscreeners.com ). This scale is used to detect generalized anxiety disorder. Cronbach’s alpha for this scale was α = 0.89.
Statistical analysis
The risk of gaming disorder was the dependent variable, and gaming time was the independent variable. Since depression and anxiety tend to correlate strongly 30 , we tested the assumption of collinearity. Furthermore, following the recommendations of Gregorich et al. 31 regarding the explanatory use of multiple regression, we conducted regression analyses both in a model that included both of these variables and in two separate models treating each variable as an individual predictor/moderator. We verified the roles of both moderators separately and compared these models with the model that included both moderators. The difference in r 2 between the two corresponding models (with and without the interaction) was derived for each potential moderator. All statistical analyses were performed with SPSS 28.0; to verify the moderation hypotheses, the PROCESS macro for SPSS 32 was used. The level of statistical significance was defined as two-sided p < 0.05. The information on the moderation analysis is reported separately from the hierarchical regression analysis in the Results.
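The moderation logic described above (comparing the r-squared of models with and without a gaming-time-by-psychopathology interaction term) can be illustrated with a self-contained least-squares sketch; all data and variable names below are invented for illustration and do not reproduce the study's results.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def r_squared(X, y):
    """Ordinary least squares via the normal equations; returns R^2."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(XtX, Xty)
    resid = [y[i] - sum(X[i][a] * beta[a] for a in range(p)) for i in range(n)]
    ybar = sum(y) / n
    return 1 - sum(e * e for e in resid) / sum((v - ybar) ** 2 for v in y)

# Synthetic data with a built-in time x anxiety interaction (invented numbers).
time = [2, 5, 8, 11, 3, 6, 9, 12]          # weekly gaming hours
anx = [0, 0, 0, 0, 1, 1, 1, 1]             # low/high anxiety indicator
risk = [1 + 0.03 * t + 0.5 * a + 0.04 * t * a for t, a in zip(time, anx)]

main_only = [[1, t, a] for t, a in zip(time, anx)]
with_inter = [[1, t, a, t * a] for t, a in zip(time, anx)]
delta_r2 = r_squared(with_inter, risk) - r_squared(main_only, risk)
print(delta_r2 > 0)  # True: the interaction term adds explained variance
```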
Ethical approval and informed consent
This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Research Ethics Committee at the Institute of Applied Psychology of the Jagiellonian University (opinion number 102/2021). Informed consent was obtained from all individual participants included in the study.

Results
Descriptive statistics
The number of participants who met the binary GD criteria in our sample was 3 (0.52%). Data on gender, place of residence, age, level of education, and relationship status are summarized in Table 1 .
The correlations between the variables analyzed can be found in Table 2 .
Predictors of the risk of gaming disorder
A linear regression was performed to understand the effect of average weekly gaming time on the risk of GD, taking into account the control variables (age and gender). Linearity was assessed using partial regression plots and a plot of studentized residuals against predicted values. There was homoscedasticity, as assessed by visual inspection of a plot of studentized residuals versus unstandardized predicted values. There was no evidence of multicollinearity, as assessed by tolerance values greater than 0.1. There were 3 cases with studentized deleted residuals greater than ± 3 standard deviations, 3 others with leverage values greater than 0.2, and none with Cook's distance values greater than 1. A careful inspection of these cases revealed no errors that warranted case deletion, but analogous analyses were performed after excluding these six cases to see whether they influenced the results; the results obtained were very similar to those reported for the complete database. The assumption of normality was met, as assessed by a Q-Q plot. There was independence of residuals, as assessed by a Durbin-Watson statistic of 1.968. A dosage effect was found: gaming time was a statistically significant predictor of the risk of GD (F(3,453) = 5.227, p < 0.001) and accounted for 3.3% of the explained variance in the risk of GD. Including depression and anxiety improved the fit of the model (F(5,453) = 12.275, p < 0.001, r 2 = 0.119). However, after including both predictors in one model, only depression remained a statistically significant predictor of the risk of GD (see the details in Table 3 ).
Interaction of gaming time and psychopathology
In all the moderation analyses described below, the assumption of homoscedasticity was met, as assessed by visual inspection of a plot of studentized residuals versus unstandardized predicted values. Collinearity diagnostics indicated that multicollinearity was not a concern (gaming time: tolerance = 0.99, VIF = 1.01; depression: tolerance = 0.32, VIF = 3.13; anxiety: tolerance = 0.32, VIF = 3.13), despite the high correlation between the two variables (Pearson's r = 0.82, p < 0.01). There was independence of residuals, as assessed by a Durbin-Watson statistic of 2.019. The model that included age and gender as control variables and both moderators at the same time was statistically significant, and its goodness of fit was further improved (F(7,457) = 10.402, p < 0.001, r 2 = 0.139). However, only the interaction between gaming time and anxiety (but not depression) was statistically significant (see the details in Table 3 , Model 3). Because our study had explanatory rather than predictive objectives, in the following part of the text we present in detail the results of both analyses, taking depression and anxiety into account separately.
Interaction of gaming time and depression
According to Hypothesis 1, we analyzed the interaction between gaming time and depression symptoms (controlling for age and gender). There was independence of residuals, as assessed by a Durbin-Watson statistic of 1.963. We found a statistically significant interaction ( b = 0.0033, SE = 0.0016, p = 0.04, Δr 2 = 0.008). Simple main effects indicated that at the mean level of depression, the effect of gaming time was significant ( b = 0.027, SE = 0.012, p = 0.02, 95% CI: 0.004, 0.050). High depression resulted in a stronger dosage effect ( b = 0.046, SE = 0.013, p = 0.0004, 95% CI: 0.021, 0.072). For low depression, the relationship was insignificant ( b = 0.008, SE = 0.017, p = 0.66, 95% CI: −0.026, 0.041). This dependency is shown in Fig. 1 , panel A. The Johnson-Neyman technique showed that a depression score of 6 or higher (calculated threshold: 5.87) resulted in a significant dosage effect. Two hundred and eleven participants scored above that threshold.
Interaction of gaming time and anxiety
According to Hypothesis 2, we analyzed the interaction between gaming time and anxiety (controlling for age and gender). There was independence of residuals, as assessed by a Durbin-Watson statistic of 1.989. We found a statistically significant interaction ( b = 0.0054, SE = 0.0018, p = 0.003, Δr 2 = 0.018). Simple main effects indicated that at the mean level of anxiety, the dose effect was significant ( b = 0.024, SE = 0.012, p = 0.04, 95% CI: 0.0009, 0.048). High anxiety resulted in a stronger dosage effect ( b = 0.053, SE = 0.013, p = 0.0001, 95% CI: 0.028, 0.079). For low anxiety, the relationship was insignificant ( b = −0.005, SE = 0.017, p = 0.78, 95% CI: −0.039, 0.029). This interaction is shown in Fig. 1 , panel B. The Johnson-Neyman technique showed that an anxiety score of 6 or greater (calculated threshold: 5.30) resulted in a significant dosage effect. One hundred and seventy-eight participants scored above that threshold.
The relationship between autodiagnosis and GDT results
Having collected answers to the single question of whether gaming significantly impedes the participants' functioning, we checked whether such a simplified diagnosis would match the GDT results. Comparing participants who answered this question affirmatively (n = 28; M GDT = 12.21 ± 4.03) with those who denied the statement (n = 433; M GDT = 7.11 ± 2.70) using the Welch test (due to unequal variances) gave statistically significant results (95% CI, 3.522 to 6.685), t(28.59) = 6.604, p < 0.001, d = 1.83. None of the people who met the GD criteria responded negatively to the self-diagnostic question, while 25 of those who were not considered problematic gamers according to the GDT criteria self-identified their gaming as significantly hindering their lives.
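The Welch statistic reported above can be reproduced directly from the published summary statistics (group means, SDs, and sizes) using the standard Welch formulas; the helper below is an illustrative sketch, not the software actually used.

```python
import math

def welch_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Welch's t and Welch-Satterthwaite degrees of freedom from summary stats."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Values reported above: self-identified problem gamers vs. the rest.
t, df = welch_from_summary(12.21, 4.03, 28, 7.11, 2.70, 433)
print(round(t, 2), round(df, 2))  # 6.6 28.59 -- matching t(28.59) = 6.604 up to rounding
```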
Discussion

The present study tested the hypothesis that some mental health conditions, more specifically depression and anxiety, can play the role not only of a mediator but also of a moderator of the relationship between the dose of gaming and the risk of GD. This prediction was directly inspired by previous research 4 , 11 and the interactional-transformational model 20 , and was theoretically grounded in the I-PACE model of behavioral addictive disorders, which includes the interaction of personality traits, global and behavior-specific individual characteristics, and psychopathological features (e.g., anxiety and depression). The model postulates two stages in which psychopathological features can play different roles, but our intention was to explore the early stage, in which anxiety and depression are expected to act as antecedent factors. Therefore, we collected a sample of active gamers without intentionally looking for people suffering from gaming disorder. Specifically, we posed two interactional hypotheses: that depression and anxiety would each independently moderate the relationship between weekly gaming time and the risk of GD. In addition, we tested a dosage effect and a model that took into account both depression and anxiety and their interactions with gaming time. Our results confirmed the existence, and the small effect size, of the dosage effect. This result is in line with the existing literature, except that in our data the percentage of explained variance (3.3%) was exceptionally low, though still statistically significant. Subsequently, we added anxiety and depression to the model. This increased the fit of the model (12% of explained variance), while indicating a statistically insignificant role for anxiety in predicting the risk of GD in such a model. Next, we analyzed a model in which we allowed interactions of depression and anxiety with gaming time.
The results were partly surprising, as this model further improved the model fit and confirmed our prediction that mental health variables act as moderators of the dosage effect. However, this was only the case for anxiety, not depression. So far, a similar relationship has only been demonstrated twice, in the studies of Yu et al. 4 and Koncz et al. 11 . However, in the case of Koncz’s research, a statistically significant interaction was found between the severity of depression and the gaming time, which our model did not confirm. The role of general anxiety as a moderator has never been tested before, but a specific form of anxiety (i.e., social anxiety) was shown to moderate this relationship 4 , 33 .
Finally, we tested our a priori hypotheses. We took this step, despite the fact that the previously presented model seemed to exclude the role of depression as a moderator of the dosage effect, for two reasons. First, we set our hypotheses primarily for explanatory purposes (understanding the potential moderating role of both variables) and not for strictly predictive purposes. The result obtained in the former step would be satisfactory if we were looking for a way to optimize the predictive capabilities of a GD risk identification model: it shows that when the interaction between anxiety and gaming time is included in the model, adding the interaction of depression with gaming time does not significantly improve the predictive value. However, our goal was different: we wanted to explain whether these variables could act as moderators. Second, the results we obtained in the working model seemed to contradict the results of Koncz et al. 11 , but they did not provide a conclusive answer. As Koncz did not collect information on anxiety, the hypothetical possibility that, had such data been included in their model, the results would have been similar to ours cannot be ruled out. Therefore, testing the model with depression alone seemed a logical step toward clarifying this apparent inconsistency.
The results obtained confirmed the hypotheses. Both anxiety and depression, treated separately, played the role of moderators of the dosage effect. First, these results confirm and extend the finding of Koncz's team, which previously demonstrated the moderating role of depression in a sample of children 11 ; in our case, a similar result occurred in a sample of adult gamers. Second, they also broaden previous findings on social anxiety as a moderator 4 , 33 ; in our case, we were able to demonstrate a similar role for general anxiety. Third, such results fit the predictions of the I-PACE model, which includes psychopathological factors, such as mood disorders, among the factors that determine the risk of developing GD at an early stage. As can be seen, treating psychopathological factors as moderators can contribute to a better understanding of the mechanisms of the development of GD (and probably other behavioral addictions), which in turn should contribute to its more effective prevention and treatment. At the same time, it should be emphasized that our results refer only to a fragment of the comprehensive I-PACE model; therefore, our study should not be treated as an attempt to verify the entire model. Future research should attempt to test the postulates of the entire I-PACE model, unlike our study, which isolated only some of the variables. An additional effect of our study was the opportunity to explore to what extent the results obtained using the GDT correspond to a simple self-diagnosis based on one question. We asked participants to indicate whether they agreed with the statement 'I think I play games in a way that significantly impairs my functioning', with the answer options 'yes' and 'no', which can be treated as the most simplified way to identify people at risk of GD.
A comparison of the average GD score between groups confirms these expectations: people who directly stated that gaming had a negative impact on their functioning obtained average GDT scores almost twice as high as those who denied it, with a very high effect size. None of the respondents who met the recommended risk criteria according to the GDT (answering "4" or "5" to each of the four questions 26 ) answered the single self-diagnostic question negatively. However, what may be interesting is the relatively large number of people who self-identified as having a gaming problem but were not classified as such based on their GDT score. There were 25 such people, that is, almost 5% of the sample, which is in line with a previous study 26 . This result confirms the need for validated and nuanced screening tools, because the general opinions of respondents may not reflect reality. On the other hand, future work should identify why gamers' own sense that their gaming is harmful is not always reflected in the GDT result.

Conclusions
In line with our hypotheses and the results of our predecessors, not only anxiety but also depression turned out to play the role of a moderator of the dosage effect. Our findings are consistent with the I-PACE model 21 . This result may prove important in practice, as it places the studied mental health conditions in their proper place: not as direct causes of the development of GD, but rather as genuine risk factors that can contribute to GD only when combined with a crucial trigger, gaming.

Abstract

The relationship between gaming time and gaming disorder can be moderated by other variables. This study aimed to test the moderating role of mental health. Participants (N = 461) were recruited online. Gaming time was a statistically significant predictor of gaming disorder risk, with an explained variance of 3.3%. The goodness of fit of the model that took both moderators (anxiety and depression) into account improved to 13.9%. The interaction between gaming time and both moderators was significant. The results showed that depression and anxiety acted as moderators of the dosage effect, possibly by amplifying the gratification of playing games and thus contributing to the development of gaming disorder. This may be important in practice, as it places mental health in its proper place, namely among risk factors that can contribute to gaming disorder in combination with a key trigger, which is gaming.
Limitations
The present findings are subject to some limitations. The study used data from a snowball procedure, which could be improved by using a representative population sample. When recruiting respondents, we did not inform them in any way that the purpose of the study was related to GD, and we did not establish any conditions regarding the intensity of playing; the only prerequisite was that the candidate played video games. This may also be related to the small number of respondents who met the recommended criteria to be considered at risk of GD; in a comparable study 26 , the percentage was three times higher, although it should be noted that for a sample of about 500 respondents, the difference between 0.5% and 1.8% comes down to a few people in absolute terms. Nevertheless, this means that our results must be treated with caution. Furthermore, it should be noted that our research was carried out using self-reports. This may be particularly important for gaming time estimates, which were assessed retrospectively; for this reason, the results we obtained may differ from actual gaming time. Future work should focus on eliminating this risk by circumventing participants' uncertain testimonies, for example by collecting objective data such as user profiles on gaming websites or recordings of activity on their devices. Unlike other researchers, we did not collect data on gaming time divided into weekdays and weekends; such a division is now standard and should be introduced in studies continuing our direction 34 . The cross-sectional design allows us only to explore associations and prevents any attribution of causality. We decided not to collect data from minors, who should certainly be included in possible replications. This may be of particular importance, as the symptom profiles of mental disorders can differ between adults and adolescents 35 .
This research has been supported by a grant from the Faculty of Management and Social Communication (AS), a grant from the Priority Research Area Digiworld under the Strategic Programme Excellence Initiative at the Jagiellonian University in Kraków (PS; U1U/P06/NO/02.34), and the National Science Center grant SONATA BIS (PS: 2021/42/E/HS6/00068).
Author contributions
P.S.: Conceptualization, Methodology, Formal analysis, Resources, Writing—Original Draft, Writing—Review & Editing, Visualization, Project administration, Funding Acquisition, A.S.: Writing—Review & Editing, Supervision, M.Ż.: Methodology, Investigation, Data Curation, Writing—Review & Editing. All of the authors had access to all data and take responsibility for the integrity of the data and the accuracy of the data analysis.
Data availability
The data sets used and analyzed in this study are available from the corresponding author upon reasonable request.
Competing interests
The authors declare no competing interests.

Citation: Sci Rep. 2024 Jan 13; 14:1257. License: CC BY.
PMC10787744 (PMID: 38218900)

Introduction
Sleep is a natural behavioral process involving reduced responses to external stimuli and changes in the activity of the cerebral cortex and muscle strength 1 . Every human spends about 27 years of his life sleeping, which alone expresses sleep’s importance 2 .
With aging comes reduced sleep quantity and quality, increasing the prevalence of insomnia 3 ; hormonal changes, especially in sex steroids such as estrogen, progesterone, and testosterone, have substantial effects on brain functions such as cognition and the sleep–wake cycle 4 . During menopause, with reduced ovarian hormones and increased pituitary gonadotropins, women experience irregular menstrual and sleep–wake cycles; the sleep duration is short, and its quality becomes undesirable 1 , 5 , 6 . A significant number of premenopausal women refer to this period as a challenging period for sleep, such that the prevalence of sleep disorders during the climacteric period is 39–47% 7 , 8 .
Several factors affect women's sleep quality; one of these is sexual performance and the relationship between couples 9 . Marital satisfaction strengthens couples' relationships, gives them a sense of pleasure, and improves self-confidence, interpersonal relationships, and physical, sexual, and psychological health 10 . However, some studies suggest that the quality of marital relations and marital satisfaction decline with age 11 . This decline differs between women and men; female sexual desire falls more steeply over time, which can reduce couples' marital satisfaction 12 .
A study on sexual health in Iran has reported many problems in sexual relationships between Iranian couples, which may be one of the reasons for the increase in divorce rates in recent years 13 . The increase in divorce, especially emotional divorce, is partly due to marital burnout, which is caused by a mismatch between the facts and expectations of the couple; its severity depends on the couple's compatibility and beliefs. Physical marital burnout is characterized by symptoms such as fatigue, lethargy, chronic headaches, abdominal pain, and sleep disturbances 14 . Meanwhile, the role of sleep disorders in the occurrence of sexual problems in couples during the premenopausal period is significant. Since middle-aged people share their sleeping environment with their partners, sleep conditions and quality can affect their marital relationship. Some studies indicate that better sleep quality, longer sleep time, and less variability in sleeping and waking times positively affect people's life satisfaction 15 , 16 . In recent years, owing to the modernization of society, changes in the structure of women's lives and employment, increased working hours of couples outside the home, child rearing, housework, and greater fatigue during rest periods, sleep-wake cycle changes and sleep disorders have become prevalent. This issue significantly affects physical, social, mental, and even sexual health. On the other hand, premenopausal women who experience the climacteric period undergo a series of physical and mental physiological changes, including changes in the sleep-wake cycle. Considering the sleep problems faced by middle-aged women and their possible impacts on physical and sexual health, such as marital burnout and marital dissatisfaction, and given the few studies that have addressed this issue in this age group, we aimed to determine the relationship between sleep quality and marital satisfaction in premenopausal working women.

Methods
This study is based on a cross-sectional design and included women working at Shiraz University of Medical Sciences, Shiraz, Iran, who were randomly selected through cluster sampling from January to April 2021. In the absence of similar prior research, and considering a 95% confidence level, 80% test power, a moderate expected correlation coefficient of 0.3, an assumed design effect of 1.5, and a 20% dropout rate, the minimum required sample size was estimated at 150 individuals using G*Power (version 3.1) sample size calculation software.
The sample size was calculated based on the following formula: n = [(Z 1−α/2 + Z 1−β )/C] 2 + 3, where C = ½ ln[(1 + r)/(1 − r)], n represents the required sample size, Z 1−α/2 and Z 1−β denote the Z-values for the significance level and the power, respectively, and r is the assumed sample correlation coefficient.
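Under the stated assumptions (r = 0.3, two-sided alpha = 0.05, power = 0.80, design effect 1.5, 20% dropout), the calculation can be sketched as follows; inflating the base n first by the design effect and then by 20% is one common convention, so the final figure lands near, rather than exactly at, the reported minimum of 150.

```python
import math

def corr_sample_size(r, z_alpha=1.959964, z_beta=0.841621,
                     design_effect=1.5, dropout=0.20):
    """Minimum n to detect correlation r, then inflate for clustering and dropout."""
    c = 0.5 * math.log((1 + r) / (1 - r))         # Fisher z-transform of r
    base = math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)
    final = math.ceil(base * design_effect * (1 + dropout))
    return base, final

base, final = corr_sample_size(0.3)
print(base, final)  # 85 153 -- close to the minimum of 150 reported above
```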
The sampling method employed in this study was random cluster sampling. The target population comprised employed women at Shiraz University of Medical Sciences in Iran who were in their premenopausal phase. Shiraz University of Medical Sciences is subdivided into seven clusters. A random selection process was used to pick a representative subset of clusters from the entire list; the number of clusters chosen was determined based on the desired sample size and the cluster size. From the sub-clusters, three faculties (Medicine, Nursing and Midwifery, and Health and Pharmacy) were randomly selected. Within each selected cluster, an exhaustive list of employed women in their premenopausal phase was compiled, and simple random sampling was carried out to select participants. Data collection involved reaching out to the selected participants within each cluster and gathering data through a questionnaire, with a specific focus on marital satisfaction indicators and sleep quality. Consistency and standardization in the data collection process were maintained across all clusters to ensure the study's integrity. Following data collection, the information was analyzed to draw conclusions and make inferences about the entire population; during this analysis, the clustering effect was taken into consideration to address potential biases.
The researchers visited these schools and selected those who met the study inclusion criteria through objective-based convenience sampling. Eligible for inclusion were female employees of Shiraz University of Medical Sciences who were married, willing to participate, ≥ 40 years old, had not reached menopause, had not had a hysterectomy, did not have a severe marital problem, were not using drugs, alcohol, sleeping pills, or medications that affect sleep quality and quantity (antidepressants, some appetite suppressants such as liraglutide, and some cardiac drugs such as propranolol, amiodarone, and carvedilol), had no history of mental illness, were not using psychiatric drugs, and had not experienced an uncomfortable or stressful event in the past six months, such as the death of a family member. The exclusion criteria were leaving more than 20% of the items unanswered and wishing to withdraw from the study at any time. After receiving permission from the Institutional Ethics Committee (IR.SUMS.REC.1399.490), the researchers explained the study's objectives to potential subjects and informed them that participation was optional, the questionnaires were anonymous, and all recorded information was confidential. Finally, those who wished to participate filled out an informed consent form before receiving the questionnaire, which they completed during their work shifts.
A demographic information form, the Pittsburgh Sleep Quality Index (PSQI), and the Evaluation and Nurturing Relationship Issues, Communication, and Happiness (ENRICH) marital satisfaction scale were used for data collection. The PSQI was designed and psychometrically evaluated by Buysse et al. (1989) at the Pittsburgh Institute of Psychiatry, with nine questions across seven dimensions: subjective sleep quality, delay in falling asleep, duration of useful sleep, sleep adequacy, sleep disorders, the use of sleep-inducing drugs, and disruption of daily functioning. Items are scored on a four-point Likert scale from 0 to 3, indicating normal, mild, moderate, and severe conditions, respectively. The total score ranges from 0 to 21, and scores above six indicate undesirable sleep quality. For the validity and reliability of the PSQI in Iran, the Cronbach's alpha coefficient was estimated at 0.55; the KMO value was 0.58 and was significant at 0.05 16 , 17 .
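The global PSQI scoring described (seven components scored 0-3, summed to a 0-21 global score, with scores above six indicating undesirable sleep) can be sketched as follows; the function and quality labels are our own.

```python
def psqi_global(components):
    """components: seven 0-3 component scores; returns (global score, quality label)."""
    if len(components) != 7 or not all(0 <= c <= 3 for c in components):
        raise ValueError("PSQI expects seven component scores coded 0-3")
    total = sum(components)
    return total, ("undesirable" if total > 6 else "desirable")

print(psqi_global([0, 1, 1, 1, 1, 1, 1]))  # (6, 'desirable') -- at the cut-off
print(psqi_global([1, 1, 1, 1, 1, 1, 1]))  # (7, 'undesirable')
```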
Olson, Furnier, and Druckman designed the ENRICH Marital Satisfaction Scale. Its original version has 125 questions and 12 subscales 18 . Suleimanian et al. have prepared its shortened form with 47 questions across nine dimensions 19 , scored on a five-point Likert scale from 1 (completely disagree) to 5 (completely agree). The ENRICH marital satisfaction scale, designed by David H. Olson, assesses marital satisfaction across nine dimensions: Personality issues, Marital relationship, Marital conflict, sexuality, financial management, leisure activities, children and parenting, Ideological orientation and family and friends 18 .
In the scale, questions 1, 2, 3, 5, 7, 9, 10, 17, 25–29, 34–36, and 43 are scored directly on the Likert scale, whereas questions 4, 6, 8, 11–16, 18–24, 30–33, 37–42, and 45–47 are scored in reverse. The total score ranges between 47 and 235. Scores between 47 and 84 indicate high dissatisfaction, scores between 85 and 122 indicate relative dissatisfaction, 123–160 indicate moderate satisfaction, 161–198 indicate high satisfaction, and 199–235 indicate very high satisfaction 19 , 20 .
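The direct and reverse scoring rules and the satisfaction categories can be combined into one helper. The item lists follow the text above; item 44 is not assigned to either list there, so it is treated as directly scored here, which is an assumption.

```python
REVERSE_ITEMS = {4, 6, 8, *range(11, 17), *range(18, 25),
                 *range(30, 34), *range(37, 43), *range(45, 48)}

def enrich_total(responses):
    """responses: dict mapping item number (1-47) to a 1-5 Likert answer."""
    return sum(6 - v if q in REVERSE_ITEMS else v for q, v in responses.items())

def enrich_category(total):
    """Map a 47-235 total to the satisfaction bands described above."""
    if total <= 84:
        return "high dissatisfaction"
    if total <= 122:
        return "relative dissatisfaction"
    if total <= 160:
        return "moderate satisfaction"
    if total <= 198:
        return "high satisfaction"
    return "very high satisfaction"

# Answering the midpoint (3) everywhere is unchanged by reverse scoring: 47 * 3 = 141.
mid = enrich_total({q: 3 for q in range(1, 48)})
print(mid, enrich_category(mid))  # 141 moderate satisfaction
```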
Data were analyzed using descriptive statistics (mean, standard deviation, and quantitative and qualitative description of variables) and inferential statistics (multiple linear regression) with SPSS version 22. In evaluating the appropriateness of the linear regression model, a range of diagnostic measures was employed. The R-squared value, a widely used metric for assessing goodness of fit, was reported to convey the proportion of variance explained by the independent variables. Attention was also directed towards the adjusted R-squared, which takes into account the number of predictors in the model. The analysis then extended to the F-statistic, which examines the overall significance of the regression model. Multicollinearity was addressed through the Variance Inflation Factor (VIF), with calculated values consistently below the threshold of 10. For all statistical analyses, a significance level of < 0.05 was considered.
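The VIF check against the threshold of 10 is the reciprocal of a predictor's tolerance, where tolerance = 1 - R-squared from regressing that predictor on the remaining predictors. A minimal sketch:

```python
def vif(tolerance):
    """VIF = 1 / tolerance, where tolerance = 1 - R^2 of the predictor on the others."""
    return 1.0 / tolerance

def multicollinearity_ok(tolerances, threshold=10.0):
    """True if every predictor's VIF stays under the chosen threshold."""
    return all(vif(t) < threshold for t in tolerances)

# e.g., a tolerance of 0.32 gives VIF = 3.125, comfortably under the threshold of 10.
print(vif(0.32))                           # 3.125
print(multicollinearity_ok([0.32, 0.99]))  # True
```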
Declaration of Helsinki
All methods were performed in accordance with the relevant guidelines and regulations.

Results
In Table 1 , demographic features of the sample are listed, and their correlation with sleep quality has been examined to identify the confounding variable. This study included 150 female participants with an average age of 51.44 ± 1.3 years. The average years of marriage were 17.21 ± 5.51. Most participants had a bachelor’s degree (67.8%), were employees (100%), had an employed husband (85.2%), and had two children (68.3%). In addition, most of their spouse’s education levels and monthly incomes were Ph.D. or above (44.3%) and more than 100 million IRR (48.6%), respectively.
The results of the quantitative evaluation of sleep quality score and marital satisfaction are shown in Table 2 .
The qualitative description of sleep disorder scores and marital satisfaction is presented in Table 3 . In the qualitative assessment of sleep disorder scores, it was observed that sleep disorders were distributed approximately equally between desirable and undesirable sleep. Additionally, the majority of women, 87 (58%), reported high marital satisfaction, while 32 (21.3%) reported very high marital satisfaction.
Multiple linear regression analysis was used to predict sleep quality from marital satisfaction (Table 4 ). The R-squared value revealed that approximately 53% of the variance in the dependent variable was accounted for by the independent variables, and the adjusted R-squared offered a more conservative evaluation of the model's explanatory capability. The significant F-statistic (F(9,138) = 2.3, P = 0.019) affirmed the overall significance of the regression, and assessment of multicollinearity using the Variance Inflation Factor (VIF) indicated the absence of problematic correlations among the predictors; the linear regression model therefore demonstrated a satisfactory fit. As shown in Table 4 , among the sub-dimensions of marital satisfaction, personality issues (β = 0.327, P = 0.05) negatively and ideological orientation (β = 0.336, P = 0.013) positively predicted poor sleep quality scores. That is, the better the understanding of personality between couples, the better the wife's sleep quality, and the greater the conflict in ideological orientations, the worse the wife's sleep quality.

Discussion
The present study revealed that the total marital satisfaction score does not predict the sleep quality score of premenopausal working women. One possible explanation is that the daily occupational burden and inadequate physical activity of these women make it difficult to achieve high sleep quality regardless of marital satisfaction. However, among the dimensions of marital satisfaction, personality issues negatively and conflict in the ideological orientation of couples positively predicted poor sleep quality; that is, the better the understanding of personality between couples, the lower the poor sleep quality score. Consistent with the present results, Sassoon et al. showed a positive relationship between neurotic personality in women during the premenopausal period and sleep disorders 21 . Brigitte et al. showed that compatible and agreeable people had better sleep time and quality 22 , and Stephan et al. stated that extroverted people have better sleep quality 23 .
Our results indicated that conflicts in ideological orientations between couples predict decreased sleep quality. It seems that these conflicts lead to continuous tension between couples and, consequently, to a lack of peace and of good sleep quality. In this regard, Hill et al. stated that adults with religious beliefs had healthier and better sleep quality outcomes than their less religious counterparts, and that doubts about ideological orientations and less belief in religious issues had an inverse relationship with sleep quality 24 .
The present study found that most participants had undesirable sleep quality. Consistent with our findings, Cibelle et al. reported that women had worse sleep quality during the climacteric period than during menstruation and experienced mild to moderate insomnia 25 . Jones et al., in their study on sleep problems of middle-aged women during the premenopausal period, showed that most women had relatively poor sleep quality 26 . Lampio et al. linked the premenopausal period with reduced total sleep time and efficiency, waking after sleep onset, and hourly awakenings 27 . In contrast, Wenjun et al. found that the sleep quality of premenopausal women was better than that of post-menopausal women and women with induced menopause 28 . Jahangiri et al. also stated that most non-menopausal and menopausal women did not report any sleep disorders 29 . Such discrepancies can be attributed to differences in culture, economic status, number of children, years of marriage, underlying diseases, exercise and nutrition, marital problems, and other factors.
Our results indicated high marital satisfaction in most participants. Shareh et al. also found that the marital satisfaction of middle-aged women was high 30 , while Talaizadeh et al. reported that marital satisfaction was almost the same in different age groups 31 . Thus, based on the present results, it seems that factors other than age affect the marital satisfaction of women during this period; investigating them was not among the objectives of this study. Finally, because of biological and individual differences, these findings cannot be generalized to all women in the premenopausal period.
Limitations and strengths
One limitation of the study is that some factors that probably affect sleep quality, such as body mass index, physical activity, and nutrition, were not examined; therefore, their relationship with sleep quality could not be assessed and their possible confounding effect could not be controlled. The strength of the present study is that it addressed the sleep quality of working women and its relationship with marital satisfaction, which can help in planning solutions to improve women's sexual and mental health.

Conclusion
The study results showed that more than half of the working women during the climacteric period had undesirable sleep quality. At the same time, they reported high marital satisfaction scores. Although the marital satisfaction score could not predict the sleep quality of working women, some of its dimensions, namely personality issues and ideological orientations of couples, could predict the sleep quality. Therefore, it seems that life skills training, especially in these two dimensions, may improve the quality of sleep and, as a result, the physical and mental health of working women during the premenopausal period.
The protocol of the current study was approved by the ethics committee of the Shiraz University of Medical Sciences (No: IR.SUMS.REC.1399.490), and informed consent was received from each participant.

Abstract

Sleep disorders can adversely affect physical, sexual, and marital health, particularly among middle-aged women. This study aimed to determine the relationship between sleep quality and marital satisfaction of working women during the premenopausal period. In this cross-sectional study, 150 women working at Shiraz University of Medical Sciences in Iran were selected using random cluster sampling. A demographic information form, the Pittsburgh Sleep Quality Index (PSQI), and the Evaluation and Nurturing Relationship Issues, Communication, and Happiness (ENRICH) marital satisfaction scale were used for data collection. Data were analyzed using SPSS version 22 at a significance level of P < 0.05. Multiple linear regression analysis was employed to predict sleep quality based on marital satisfaction. Our results showed that 79 (52.7%) of the participants had undesirable sleep quality, 87 (58%) had high marital satisfaction, and 32 (21.3%) had very high marital satisfaction. Regression analysis revealed that the total marital satisfaction score could not predict the sleep quality score of the participants. However, among the dimensions of marital satisfaction, personality issues negatively (β = 0.327, P < 0.05) and ideological orientation positively (β = 0.336, P < 0.01) predicted the sleep quality score. Given that personality issues and ideological orientations predicted the sleep quality score, it seems that life skills training, especially in these two dimensions, may improve the quality of sleep and, as a result, the physical and mental health of working women.
Abbreviations
PSQI: Pittsburgh Sleep Quality Index
ENRICH: Evaluation and Nurturing Relationship Issues, Communication, and Happiness
Acknowledgements
We would like to thank the Vice Chancellor for Research of Shiraz University of Medical Sciences and the premenopausal women employees of Shiraz University of Medical Sciences for their continuous support of the study.
Classification
Diagnostic study.
Author contributions
P.G.H. and P.Y. contributed to the conceptualization, design, and critical revision of the final manuscript; P.Y. and S.M. contributed to the design, preparation of the manuscript, and critical revision of the final manuscript; P.Y. and M.Z. contributed to the data analysis and critical revision of the final manuscript. All authors read and approved the final manuscript.
Data availability
Readers and researchers can request the data by directly contacting the primary author at Ghaemmaghami [email protected].
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:55 | Sci Rep. 2024 Jan 13; 14:1248 | oa_package/11/67/PMC10787744.tar.gz |
Accession: PMC10787745, PMID: 38218977

Introduction
Total reflection X-ray fluorescence (TXRF) spectroscopy is a variant of conventional energy-dispersive X-ray fluorescence and belongs to the instrumental methods of multi-elemental analysis. The source-sample-detector geometry used in the TXRF method leads to a significant decrease in the recorded background and a simultaneous increase in the intensity of the atomic fluorescence signals 1 – 4 . Numerous advantages of TXRF, including the small amount of sample required for measurement, the wide range of examined concentrations, low detection limits, simple quantification based on the internal standard method, and cost-effectiveness, mean that for a few decades the method has been successfully used in many fields of science and technology, among others in biomedicine, pharmacy, environmental sciences, and the mining and fuel industries 5 – 14 .
A better understanding of the physical processes underlying the generation of TXRF spectra, the use of modern technological solutions in the field of X-ray generation and detection, new and better methods of data recording and analysis, and the improvement of practical aspects of measurements have made the method highly reliable and user-friendly 1 , 3 , 15 – 20 .
Glioblastoma multiforme (GBM) is one of the most malignant human tumors overall 21 , 22 . As the early symptoms of GBM are not specific, it is usually diagnosed in its final stages. Even after an appropriate treatment regimen is introduced, the prognosis is poor and the survival rate of patients is very low. According to the WHO, patients suffering from GBM live from 9 to 30 months after diagnosis 21 , 23 . The high malignancy of GBM, restricted treatment possibilities and poor prognosis push medical doctors and scientists to search for markers characteristic of the early stages of disease development and for more effective methods of its diagnosis and therapy 22 – 27 .
The mechanisms of cancerogenesis have been studied for decades. Despite this, the knowledge gathered so far allows the pathogenesis of neoplastic diseases to be understood only to some degree. The exact etiology of GBM is still not well known, but its risk factors are thought to include genetic predispositions (diseases that increase susceptibility to cancer, such as neurofibromatosis and Li-Fraumeni syndrome, as well as previous radiotherapeutic cycles), harmful environmental factors (long-term exposure to pesticides, smoking, working in oil refining or rubber production) and lifestyle (eating processed meat, exposure to formaldehyde) 24 , 27 , 28 .
GBM can develop from healthy glial cells, as well as from an early-stage astrocytoma fibrillare. It is suspected that oligodendrocytes and stem cells may also transform into GBM. Almost all glioblastomas take the form of a tumor which, during its growth, consists of foci of necrosis surrounded by a layer of anaplastic cancer cells with hyperplastic blood vessels. Such complicated morphology means that complete surgical resection of the tumor is very difficult, and after surgery GBM usually recurs 21 , 22 .
Determining the influence of GBM development within the brain on the whole organism remains a great challenge. The stages and mechanisms of the aggressive expansion of neoplastic cells into their surroundings, as well as the genetic indicators of cancerogenesis itself, are known to medicine. However, beyond the process of metastasis, the influence of GBM on vital organs other than the brain is still a matter of speculation, and information on the elemental abnormalities occurring in distant organs may shed new light on this problem 29 – 33 . Therefore, the purpose of our study was to determine the changes in the concentrations of major, minor and trace elements that occur in the kidney, heart, spleen and lung of rats subjected to implantation of human GBM cells into the brain. The TXRF method was applied for the multi-elemental analysis of tissues, and the measurements of digested organ samples were done, independently, in 3 European laboratories. The data obtained there were compared to verify how the not fully satisfactory values of the validation parameters obtained in our previous inter-comparison research 34 , especially for light elements, may influence the results and the subsequent conclusions of a real experiment.

Materials and methods
Experimental animals
The animals used in this study were male Wistar rats originating from the husbandry of the Department of Experimental Neuropathology of the Institute of Zoology and Biomedical Research, Jagiellonian University in Krakow. All animal-use procedures were approved by 2nd Local Institutional Animal Care and Use Committee in Krakow (agreement no. 119/2016) and were executed in consonance with relevant guidelines and regulations. All methods are reported in the paper in accordance with ARRIVE guidelines ( https://arriveguidelines.org ).
Glioma cells implantation
At 9 weeks of age, the rats were divided into 3 groups: N, T and U, each consisting of 5 individuals. Group N included naive control animals, while the rats from groups T and U were subjected to the implantation of glioma cells into the brain, using the T98G and U87MG cell lines from ATCC, respectively. The day before and on the day of surgery, the animals were weighed and cyclosporine (Sandimmun 50 mg/mL, Novartis Poland) was given to them intravenously at a dose of 10 mg/kg of body mass for immunosuppressive purposes. Before implantation, the rats were anesthetized in an isoflurane-filled desiccator (Aerrane, Baxter Poland), and throughout the whole surgery the same substance was administered to them by inhalation. The implantation site was determined stereotaxically in the left hemisphere (coordinates antero-posterior: − 0.30 mm; medio-lateral: 3.0 mm; dorso-ventral: 5.0 mm, Paxinos and Watson 1986). A cell suspension in a volume of 5 μL was introduced into the brain through an intracranial hole drilled beforehand. The concentration of T98G and U87MG cells in the suspension was 50,000 and 5000 cells/μl, respectively. After glioma cell injection, the wound was sutured with a stapler and disinfected. After surgery, the animals were administered cyclosporine (Novartis Poland) at a daily dose of 5 mg/kg of body mass. Furthermore, for the first 7 days after glioma cell implantation, the rats were given an antibiotic (Sul-Tridin 24%, ScanVet, Poland). For groups N and T the experiment lasted 21 days from the day of implantation; in the case of group U, due to the animals' very poor health, the experiment was terminated 15 days after the surgery. Rats were sacrificed by intravenous administration of Euthasol-Vet (Euthasol vet 400 mg/mL, Le Vet B.V.) at doses appropriate for their weight. From each rat the kidney, heart, spleen and lung were taken.
The organs were weighed, frozen in liquid nitrogen and separately packed in sterile bags, which were stored in an ultra-freezer until the mineralization procedure.
Sample preparation
Microwave-assisted acid digestion was performed using the Speed Wave 4 system (Berghof). Each organ was placed in a separate Teflon vessel (DAP100) together with high-purity 65% nitric acid (Suprapur, Merck), the volume of which was typically 2.5 ml per 1 g of tissue. The sample masses varied from approximately 300 mg for spleens to 2 g for livers. For the purpose of quantitative elemental analysis via the internal standard method, 100 μl of a 1000 ppm gallium solution (Gallium ICP standard in HNO 3 2–3% 1000 mg/l Ga Certipur ® , Merck) was added to the entire volume of the digested sample. Afterwards, 1 ml subsamples of the prepared solution were transferred to separate vials and stored at a temperature of 5 °C until they were sent to the cooperating laboratories.
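The resulting internal-standard concentration follows from a simple dilution calculation. The sketch below uses a hypothetical total digest volume, since the actual volume varied with the tissue mass.

```python
def gallium_concentration(std_volume_ml, std_conc_mg_per_l, digest_volume_ml):
    """Ga concentration (mg/L) in the digest after spiking with the standard."""
    ga_mass_mg = std_conc_mg_per_l * std_volume_ml / 1000.0  # mL -> L
    return ga_mass_mg / (digest_volume_ml / 1000.0)

# 100 ul of a 1000 mg/L Ga standard spiked into an assumed total digest
# volume of 2.6 mL (1 g tissue + 2.5 mL acid + spike; illustrative only)
c_ga = gallium_concentration(0.1, 1000.0, 2.6)
```

With these assumed numbers the 0.1 mg of added Ga corresponds to roughly 38.5 mg/L in the digest, which is the value entering the internal-standard quantification.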
Apparatus and experimental conditions
The measurements of the digested organ samples were carried out in three laboratories involved in the ENFORCE TXRF COST Action 18130: the X-ray Fluorescence Laboratory of the Faculty of Physics and Applied Computer Science at the AGH University of Science and Technology (Laboratory 1, Krakow, Poland), the Laboratory of X-ray Methods of the Center for Research and Analysis at the Jan Kochanowski University (Laboratory 2, Kielce, Poland) and the TXRF Laboratory of the Interdepartmental Research Service at the Autonomous University of Madrid (Laboratory 3, Madrid, Spain). The TXRF spectrometers and the experimental conditions used in the particular cooperating laboratories are listed in Table 2 . A detailed description of the preliminary and measurement procedures was presented elsewhere 34 .
Data analysis
The concentration of each element in an organ was calculated from the number of counts recorded for its peak in the sample spectrum, taking into account the concentration of the internal standard (Ga) in the sample, the number of counts for the Ga peak in the sample spectrum and the relative sensitivity of the TXRF system for the measured element. The details of the calculations were presented elsewhere 34 .
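The quantification step described here, and the Mann–Whitney U test used below for the group comparisons, can be sketched as follows. This is an illustrative reconstruction, not the exact procedure of ref. 34; all numeric values in the example are hypothetical, and the critical value quoted in the comment comes from standard exact tables.

```python
def txrf_concentration(n_x, s_x, n_ga, c_ga, s_ga=1.0):
    """Internal-standard quantification:
    C_x = (N_x / S_x) / (N_Ga / S_Ga) * C_Ga,
    where N are net peak counts, S are relative sensitivities and
    c_ga is the known Ga concentration in the sample."""
    return (n_x / s_x) / (n_ga / s_ga) * c_ga

def mann_whitney_u(a, b):
    """Two-sample Mann-Whitney U statistic (ties counted as 0.5).
    For n1 = n2 = 5, the exact two-sided critical value at alpha = 0.05
    is U <= 2, i.e. the groups differ significantly when U <= 2."""
    greater = sum(1.0 if x > y else 0.5 if x == y else 0.0
                  for x in a for y in b)
    return min(greater, len(a) * len(b) - greater)
```

For instance, an element peak with 5000 net counts and relative sensitivity 2.0, measured against 10 000 Ga counts at 40 mg/L Ga, yields 10 mg/L.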
The statistical significance of the differences in the elemental composition of the examined organs between the rats subjected to glioma cell implantation (groups T and U) and the control group was evaluated with the non-parametric Mann–Whitney U test, and the appropriate calculations were done using Statistica software version 7.1. The statistical analysis was done independently for each laboratory participating in the study.

Results
As a result of the elemental analysis, for each of the 15 examined animals (5 animals × 3 study groups), information on the concentrations of P, S, K, Ca, Fe, Cu, Zn and Se in the 4 examined organs (kidney, heart, spleen and lung) was obtained. The element content was measured independently by the three cooperating laboratories using various apparatus and/or measurement conditions; therefore, the statistical analysis of the obtained data, aimed at evaluating the significance of the differences between the examined animal groups, was also done separately for each laboratory. Figures 1 , 2 , 3 , 4 and 5 compare the dispersions of the element concentrations in the examined organs for the three analyzed rat groups, as obtained by the three collaborating laboratories (L1, L2 and L3). The data obtained for rats subjected to the implantation of T98G and U87MG cells are marked in blue and violet, respectively, and the concentrations recorded for the control group in green.
In order to better visualize the differences in elemental composition between the experimental and control rats, as well as the possible discrepancies between the cooperating laboratories in this respect, the data presented in Figs. 1 , 2 , 3 , 4 and 5 are additionally summarized in Table 1 . Statistically significant differences compared to the normal group of animals are marked there with double arrows, and concentrations higher and lower than in the normal organs are differentiated by the arrow direction.
Based on the results presented in Table 1 , the observed differences in elemental composition were classified as fully or partially compliant. When the same relation (or its absence) was found in all three cooperating laboratories, the result was treated as fully compliant; when the accordance was confirmed in two of the three laboratories, the result was classified as partially compliant.
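The classification rule above amounts to a majority check over the three laboratories. In the sketch below, the label for the case in which all three laboratories disagree ("discordant") is our own addition, since the text defines only the fully and partially compliant cases.

```python
def compliance(flags):
    """Classify one element/organ finding across the three labs.
    flags: tuple of per-lab outcomes, e.g. ('up', 'up', None), where
    'up'/'down' denote a significant increase/decrease versus controls
    and None denotes no significant difference."""
    top = max(flags.count(f) for f in set(flags))
    if top == len(flags):
        return "fully compliant"
    if top == len(flags) - 1:
        return "partially compliant"
    return "discordant"  # not defined in the text: all three labs disagree
```

For example, `('up', 'up', None)` is partially compliant, while three identical outcomes (including three non-significant ones) are fully compliant.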
As one can see from Figs. 1 , 2 , 3 , 4 and 5 and from Table 1 , although the samples were measured in the three cooperating laboratories using various TXRF spectrometers and/or different experimental conditions, the detected changes in elemental composition were mostly in very good agreement. In the case of the elements with higher atomic numbers (Fe, Cu, Zn and Se), 88% of the results were classified as fully compliant. The greatest number of discrepancies between the laboratories was found for P, S, K and Ca. The values of the validation parameters, including the limits of detection, intra- and inter-day precision and trueness (Table S1 of the Appendix), determined for these elements point to the poorer accuracy and repeatability of the method in comparison with the heavier elements. This, in turn, influences the inter-laboratory precision of their analysis in the tissue samples. The mentioned problem was discussed in detail in our earlier paper concerning the TXRF round-robin test for mammalian tissue samples 34 . However, it is necessary to mention that, also in the case of the elements with lower atomic numbers, more than 50% of the results were fully compliant and all the rest fulfilled the requirements of partial agreement.
In the further part of this paper we focus only on those elemental abnormalities (appearing after tumor cell implantation) that were observed in at least two of the cooperating laboratories, meaning that they at least fulfilled the criterion of partial compliance. The final differences taken into consideration are listed in the rows F of Table 1 .
For both experimental groups, significant elemental anomalies were observed in all examined organs after the implantation of GBM cells into the brain. According to Figs. 1 , 2 , 3 , 4 and 5 and Table 1 , most of the elemental changes in both animal models were noted in the spleen and lungs. With one exception (the level of Ca in the spleen), the concentrations of the analyzed elements in the spleen measured for rats subjected to glioma cell implantation were lower than in the control group. The opposite relationship was found for the lungs.
As shown in our earlier work 35 and in Figures S1 - S3 of the Appendix, no neoplastic tumor developed after the implantation of T98G cells in the rat brain. Despite this fact, in this study the animals from group T showed a number of elemental anomalies in the examined organs. Moreover, in the heart, for example, their number significantly exceeded the number of abnormalities observed in group U, in which massive tumors appeared after the implantation of U87MG cells (Fig. S3 ). Depending on the examined organ, the direction of the recorded elemental changes was the same for both models (a decreased content of most elements in the spleen, an increased content in the lungs) or opposite, as, for example, in the kidneys.
Compared to controls, lower P levels were recorded in the kidneys (group U), hearts (group T) and spleens (both experimental groups). In turn, in the lungs of animals implanted with U87MG cells, the concentration of this element increased. The concentration of S changed mainly in group U: it decreased in the kidneys and spleen and increased in the lungs. The last regularity was also noted for animals implanted with cells of the T98G line.
In the animals from group T, abnormalities in K concentration were found in all examined organs, with the level of the element being elevated in the kidneys, hearts and lungs, and decreased in the spleen compared to the control group. In the case of the animals subjected to implantation of the U87MG cell line, analogous abnormalities were noted, but only for the spleens and lungs. The level of Ca also changed more often in the animals of group T than of group U. The rats subjected to T98G cell implantation showed a reduced level of this element in the spleen and heart and an increased level in the kidneys, whilst those from the second experimental group were characterized only by a higher content of Ca in the spleen.
Anomalies in Fe accumulation were found mostly in group T, in which an increased level of the element was observed in the kidneys, hearts and lungs. In both experimental groups the level of Cu was elevated in the lungs but diminished in the spleens. In turn, in the kidneys the observed abnormalities differed between the two groups of animals subjected to glioma cell implantation, namely, the Cu content increased in group T and decreased in group U. Both Zn and Se were elevated in the lungs of the experimental rats. The first of these elements was, additionally, decreased in the hearts of group U and in the spleens of group T. In turn, Se was diminished in the kidneys and spleens of the rats subjected to U87MG cell implantation and increased in the hearts of those that received T98G cells.

Discussion and conclusions
The cancerogenesis process occurring in the brain may influence the response of the immune system in other parts of the body as well, and it may manifest itself in changes in the elemental composition of tissues 36 – 38 . The elemental anomalies of various tissues and body organs associated with the occurrence of neoplastic processes are still at the stage of discovery and characterization 29 – 33 . Therefore, the aim of this investigation was to identify the elemental abnormalities that appear in distant organs as a result of the implantation of human GBM cells into the rat brain and/or the subsequent tumor development. To achieve this goal, the kidneys, hearts, spleens and lungs were taken from animals subjected to the implantation of T98G (described in the literature as non-tumorigenic) and U87MG glioma cells (known as tumorigenic) and from control rats 39 – 41 . The digested samples of the organs were then measured in three European laboratories equipped with different commercially available TXRF spectrometers. The concentrations of elements, including P, S, K, Ca, Fe, Cu, Zn and Se, were determined using the internal standard method, with Ga used for this purpose. To evaluate the statistical significance of the differences between the experimental and normal rats, the non-parametric Mann–Whitney U test was applied 42 independently in each laboratory participating in the investigation. The following discussion and conclusions, based on literature data obtained for samples of human and animal origin, relate to the results agreed upon by the three laboratories participating in the comparative study.
Phosphorus, as a building element of nucleic acids, phospholipids, phosphoproteins and ATP, is involved in numerous processes occurring in cells and tissues 43 – 45 . Hubersch et al., Srivastava et al. and Planeta et al. showed that cancerous GBM tissue is characterized by a diminished concentration of this element, which is probably related to the decrease in the level of phospholipids (i.e. lecithin, sphingomyelin) within the tumor 35 , 46 , 47 . The present study showed a decreased concentration of P in most of the examined organs. The exception to this rule was the elevated level of the element found in the lungs of the animals from group U. The decreased content of P observed in various organs of the experimental animals may be an effect of hypophosphatemia, which, in turn, could result from kidney dysfunction 48 . Such a conclusion seems to be supported by the large elemental imbalance observed for this organ, especially in the animals subjected to T98G cell implantation.
Fibroblast growth factor 23 (FGF23) is a protein and member of the fibroblast growth factor family which participates in the regulation of the phosphate level in plasma and in the metabolism of vitamin D. It decreases the reabsorption of phosphates in the kidneys, allowing their excretion with urine. As shown by Bollenbecker et al., FGF23 may be elevated during systemic inflammation 49 . What is more, a higher level of FGF23 in the lungs may lead to an elevated phosphate concentration within the organ 50 , 51 , and probably this phenomenon is responsible for the significantly higher P level found in the lungs taken from the animals of group U. It is necessary to mention that a similar relation, although in this case not statistically significant, was observed also for the rats of group T.
Ca metabolism is closely related to that of P. The linked homeostasis of both elements is crucial for the proper functioning of the neuromuscular system and for the mineralization process 52 – 56 . There is also evidence that Ca plays an important role in cell signaling during proliferation, and Ca channels and Ca-regulated proteins show diverse and interconnected roles in shaping GBM biology and promoting tumor growth 57 . The depletion of the calcium ion level in the endoplasmic reticulum of glioma cells triggers their influx across the plasma membrane from the extracellular space 57 – 60 . On a macro-scale, this causes imbalances in the Ca level and the occurrence of the cell competition mechanism 57 . An elevated level of Ca in serum, called hypercalcemia, may occur in patients with neural tumors; however, it is described as being associated with astrocytic tumors rather than gliomas 57 – 60 . Our results showed a significantly decreased concentration of Ca in the hearts of the animals subjected to T98G cell implantation, and an analogous trend for those implanted with U87MG cells. According to Shah et al., hypocalcemia may be caused by a low Ca concentration in serum, which translates into a diminished level of the element in the heart 61 . This, in turn, may alter the flow of calcium through the voltage-gated cardiac calcium channels and lead to cardiac diseases 60 – 62 .
Sulphur is one of the most abundant minerals in the human body, and its presence is related to the sulphur-containing amino acids: methionine, cysteine, cystine, homocysteine, homocystine, and taurine 63 , 64 . The occurrence and progression of GBM within the brain is connected with the activity of γ-cystathionase and the iron-sulphur centres of redox proteins 65 . M. Wróbel et al. observed the accumulation of sulphane sulphur in human gliomas and pointed to its importance for malignant cell proliferation and tumour growth, linking this process with the diminished activity of γ-cystathionase. They also showed a correlation between the amount of sulphane sulphur and the stage of malignancy. A high level of sulphane sulphur and a high GSH/GSSG ratio could result in elevated levels of hydrogen sulphide, which is often connected with an increase in the malignancy of tumors 65 .
Iron-sulphur centers of redox proteins, consisting of Fe–S clusters, mediate electron transfer in the mitochondrial respiratory complexes. They are vital for the production and consumption of energy in cells and are involved in the generation of reactive oxygen species (ROS) 66 , 67 . An organ affected by the cancerous process experiences the so-called Warburg effect: even in the presence of oxygen and properly functioning mitochondria, the glucose uptake of cells radically increases and lactate is produced 66 – 69 . A neoplasm present in the body poses a great challenge and load for the immune system. Its response to any infection or abnormality appearing in the organism is connected with the release of cytokines, whose role is to alarm and affect the growth, proliferation and stimulation of the cells involved in the immune response, as well as of haemopoietic cells 68 . Cytokine activity, great energy requirements and the altered energy metabolism in the neoplastic regions cause the rest of the organism to work at the minimum level necessary to maintain only its main functions. This reduced requirement for and use of energy by the remaining organs may be the reason for the diminished sulphur level observed in them 68 , 70 . The increased sulphur concentration in the lungs of the experimental rats may, in turn, be attributed to the pulmonary surfactant lining the alveoli. Han et al. pointed to its importance for preventing the dissemination of pathogens, eliminating them and modulating the immune responses 71 . The overproduction of surfactant in the animals subjected to glioma cell implantation probably occurred in response to the ongoing inflammation in their respiratory system 71 – 73 .
Potassium is one of the most important electrolytes in the human body, as it is involved in maintaining the integrity of the skeleton, regulation of muscle contraction, blood pressure and nerve transmission. It is essential for the proper activity of all cells 74 , 75 . Hyperkalemia is a common disorder in patients suffering from cancers. The source of modifications of potassium channels and transmembrane proteins that mediate potassium within cells, may be different for various patients and may include renal failure, decreased potassium secretion and enhanced chloride reabsorption 76 – 79 . Low potassium in the spleen detected in our study may be related to the anemia caused by tumors progression. Despite the appropriate level of potassium in erythrocytes, the lower number of red blood cells undergoing lysis in the spleen may translate into the reduced level of the element within the organ 74 , 78 , 79 .
Iron is an essential metal in the human body. It is present in hemoglobin, myoglobin and various enzymes. The same, it takes part in oxygen transport, its storage in muscles, as well as energy production and regulation of diverse cell functions, including proliferation 80 – 82 . As mentioned above, it is also a building element of Fe-S centers of redox proteins mediating electron transfer in the mitochondrial respiratory complexes 66 , 67 , 82 . The high iron content detected in kidneys, hearts and lungs of animals representing T group may be associated with the altered iron metabolism and increased blood ferritin level connected with the process of tumorigenesis 83 – 85 . The mentioned above increased iron level was not observed in organs taken from animals subjected to the implantation of the most invasive glioma cell line U87MG. In rats from U group, the development of massive tumor within the brain was observed and their very poor health conditions made it necessary to shorten the experimental time. These animals presented the lowered iron content in the spleen, probably related to the anemia which is frequently observed in patients suffering from GBM. The anemia in GBM patients may be associated both with a reduced number of reticulocytes and a lowered iron-binding capacity. Therefore, the diminished concentration of iron in the spleen of rats from U group, was probably an effect of the reduced number of low-iron erythrocytes hemolysis in this organ 84 .
In human body, copper is required for the proper action of enzymes involved in aerobic metabolism, such as cytochrome c oxidase in the mitochondria, dopamine monooxygenase in the brain, lysyl oxidase in connective tissue and ceruloplasmin 86 . Together with iron, this element takes part in the formation of red blood cells. It was also proven that the growth and metabolism of cancer cells, due to the elevated angiogenesis, require more copper 87 – 90 . As copper is also strictly associated with the progress of inflammation process, including the cytokines production, during inflammation increased levels of the element are observed in serum 87 . Probably, just ongoing in kidneys and lungs inflammation process caused the increase of the copper level in these organs in case of animals subjected to T98G cells implantation. For the U group the mentioned effect was noticed only for lungs. In turn, both experimental groups were characterized by decreased copper concentration in spleen, which at least in case of U group, may be explained by anemia.
The role and activity of Zn in the body are tightly bound to those of Fe and Cu. Zn is crucial for the development and function of immune cells, hematopoiesis, cell signaling and the inflammatory response. It also participates in DNA synthesis and protein production 91 , 92 . Mehrian-Shai et al . found that the protein p53, which has a suppressive influence on cancers, is activated by zinc 93 . Thus, the detected zinc deficiency could be associated with the modulation of immune system activity by the developing GBM. In this context, an anemia of chronic disease and the problems with nutrients absorbtion should also be taken into account 92 – 95 . In turn, the increased level of zinc detected in lungs of animals representing both experimental groups may point at the occurrence of the organ oxidative stress, but at those stage of investigation the results are inconclusive 66 , 96 .
Selenium is a microelement important for human body from the point of view of the antioxidant processes and their role in the immune system response. Its presence in proteins is connected with the amino acid of selenocysteine 97 . Zhang et al . noticed that the level of the element is higher within in the region of brain tumor comparing to the healthy tissue 98 . Similar observations were done during our previous study, analyzing brain samples taken from the area of glioma cells implantation 35 . The increased demand and accumulation of selenium in GBM tissue may be the cause of its low serum level. In turn, the mentioned effect, together with the anemia of chronic disease, may explain the detected depletion of selenium in the kidneys and spleen of rats from U group 97 – 99 . On the other hand, an elevated content of the element in lungs, in case of both experimental groups, and in heart of animals from T group, could be associated with the selenium-driven enhancement of cell-mediated and humoral immunity 97 – 100 . The increased content of selenium in heart may be also explained by the production of antioxidative agents, containing i.e. selenium, in the answer of ROS travelling through the coronary system to regions highly supplied with blood 99 – 102 .
Summarizing, the conducted studies have shown that the implantation of human glioma cells into the rat brain is associated with a number of elemental anomalies in distant organs. These changes occur even when there is no tumor developing in the brain. The observed disorders of element homeostasis may result from many processes occurring in the body as a result of implantation of cancer cells or the development of GBM, including inflammation, anemia of chronic disease or changes in iron metabolism. The evaluation of the anomalies detected in the 3 laboratories participating in the inter-comparison study showed a good agreement between obtained results. When all the experimental data were taken into account, the full compliance of the results was observed for 72% of cases. In case of elements with higher Z, it was better, as the full compliance of the results was found for 88% of cases. | Discussion and conclusions
The cancerogenesis process occurring in the brain may influence the response of the immune system also in other parts of the body, and it may manifest itself in changes in the elemental composition of tissues 36 – 38 . The elemental anomalies of various tissues and body organs associated with the occurrence of neoplastic processes are still at the stage of discovery and characterization 29 – 33 . Therefore, the aim of this investigation was the identification of elemental abnormalities that appear in distant organs as a result of the implantation of human GBM cells into the rat brain and/or subsequent tumor development. To achieve this goal, the kidneys, hearts, spleens and lungs were taken from animals subjected to implantation of T98G (described in the literature as non-tumorigenic) and U87MG glioma cells (known as tumorigenic) and from control rats 39 – 41 . The digested samples of organs were then measured in three European laboratories equipped with different commercially available TXRF spectrometers. The concentrations of elements, including P, S, K, Ca, Fe, Cu, Zn and Se, were determined using the internal standard method, with Ga serving as the internal standard. To evaluate the statistical significance of differences between experimental animals and normal rats, the non-parametric Mann–Whitney U test was applied 42 independently in each laboratory participating in the investigation. The following discussion and conclusions, based on literature data obtained for samples of human and animal origin, relate to the results agreed upon by the three laboratories participating in the comparative study.
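The two quantitative steps just described can be illustrated with a short sketch: the standard TXRF internal-standard formula (element concentration from the ratio of fluorescence intensities, scaled by relative sensitivities and the known Ga concentration), followed by a minimal Mann–Whitney U statistic. All intensities and sensitivity values below are made up for illustration; in practice the test, including its p-value, would come from a statistics package such as `scipy.stats.mannwhitneyu`.

```python
def concentration_by_internal_standard(n_x, n_is, s_x, s_is, c_is):
    """TXRF internal-standard quantification:
    C_x = (N_x / N_IS) * (S_IS / S_x) * C_IS,
    where N are fluorescence intensities, S are relative sensitivities,
    and C_IS is the known concentration of the Ga internal standard."""
    return (n_x / n_is) * (s_is / s_x) * c_is

def mann_whitney_u(a, b):
    """Minimal Mann-Whitney U statistic (average ranks for ties)."""
    combined = list(a) + list(b)
    order = sorted(range(len(combined)), key=lambda i: combined[i])
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(combined) and combined[order[j + 1]] == combined[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # 1-based average rank
        i = j + 1
    r1 = sum(ranks[: len(a)])
    u1 = r1 - len(a) * (len(a) + 1) / 2
    return min(u1, len(a) * len(b) - u1)
```

For example, two fully separated groups give U = 0, the strongest possible evidence of a difference at a given sample size.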
Phosphorus, as a building element of nucleic acids, phospholipids, phosphoproteins or ATP, is involved in numerous processes occurring in cells and tissues 43 – 45 . Hubersch et al., Srivastava et al. and Planeta et al. showed that cancerous GBM tissue is characterized by a diminished concentration of this element, which is probably related to the decrease in the level of phospholipids (e.g. lecithin, sphingomyelin) within the tumor 35 , 46 , 47 . The present study showed a decreased concentration of P in most of the examined organs. The exception to this rule was the elevated level of the element found in the lungs of animals from the U group. The decreased content of P observed in various organs of experimental animals may be an effect of hypophosphatemia, which, in turn, could result from kidney dysfunction 48 . Such a conclusion seems to be supported by the large elemental imbalance observed for this organ, especially in animals subjected to T98G cell implantation.
Fibroblast growth factor 23 (FGF23) is a protein of the fibroblast growth factor family that participates in the regulation of the phosphate level in plasma and in the metabolism of vitamin D. It decreases the reabsorption of phosphates in the kidneys, allowing their excretion with urine. As shown by Bollenbecker et al., FGF23 may be elevated during systemic inflammation 49 . What is more, a higher level of FGF23 in the lungs may lead to an elevated phosphate concentration within the organ 50 , 51 , and this phenomenon is probably responsible for the significantly higher P level found in the lungs taken from animals representing the U group. It is worth mentioning that a similar, though not statistically significant, relation was also observed for rats from the T group.
Ca metabolism is closely related to that of P. The linked homeostasis of both elements is crucial for the proper functioning of the neuromuscular system and for the mineralization process 52 – 56 . There is also evidence that Ca plays an important role in cell signaling during proliferation, and Ca channels and Ca-regulated proteins play diverse and interconnected roles in shaping GBM biology and promoting tumor growth 57 . The depletion of the calcium ion level in the endoplasmic reticulum of glioma cells triggers their influx across the plasma membrane from the extracellular space 57 – 60 . On a macro-scale, this causes imbalances in the Ca level and the occurrence of the cell competition mechanism 57 . An elevated level of Ca in serum, called hypercalcemia, may occur in patients with neural tumors; however, it is described as being associated with astrocytic tumors rather than gliomas 57 – 60 . Our results showed a significantly decreased concentration of Ca in the hearts of animals subjected to T98G cell implantation, and an analogous trend for those implanted with U87MG cells. According to Shah et al., a low Ca concentration in serum (hypocalcemia) may translate into a diminished level of the element in the heart 61 . This, in turn, may alter the flow of calcium through the voltage-gated cardiac calcium channels and lead to cardiac diseases 60 – 62 .
Sulphur is one of the most abundant minerals in the human body, and its presence is related to the sulphur-containing amino acids—methionine, cysteine, cystine, homocysteine, homocystine, and taurine 63 , 64 . The occurrence and progress of GBM within the brain are connected with the activity of γ-cystathionase and the iron-sulphur centres of redox proteins 65 . M. Wróbel et al. observed the accumulation of sulphane sulphur in human gliomas and pointed to its importance for malignant cell proliferation and tumour growth. They linked this process with the diminished activity of γ-cystathionase. They also showed a correlation between the amount of sulphane sulphur and the stage of malignancy. A high level of sulphane sulphur and a high GSH/GSSG ratio could result in elevated levels of hydrogen sulphide, which is often connected with increased tumor malignancy 65 .
Iron-sulphur centers of redox proteins, consisting of Fe–S clusters, mediate electron transfer in the mitochondrial respiratory complexes. They are vital for the production and consumption of energy in cells and are involved in the generation of reactive oxygen species (ROS) 66 , 67 . An organ affected by the cancerous process experiences the so-called Warburg effect—even in the presence of oxygen and properly functioning mitochondria, the glucose uptake of cells radically increases and lactate is produced 66 – 69 . A neoplasm present in the body poses a great challenge and load to the immune system, whose response to any infection or abnormality appearing in the organism is connected with the release of cytokines. Their role is to alarm and to affect the growth, proliferation and stimulation of cells involved in the immune response, as well as of haemopoietic cells 68 . Cytokine activity, great energy requirements and altered energy metabolism in the neoplastic regions cause the rest of the organism to work at the minimum level necessary to maintain only its main functions. This reduced requirement for, and use of, energy by the remaining organs may be the reason for the diminished sulphur level observed in them 68 , 70 . The increased sulphur concentration in the lungs of experimental rats may, in turn, be assigned to the pulmonary surfactant lining the alveoli. Han et al. pointed to its importance for preventing the dissemination of pathogens, for their elimination, and for modulating immune responses 71 . The overproduction of surfactant in animals subjected to glioma cell implantation probably occurred in response to the ongoing inflammation in their respiratory system 71 – 73 .
Potassium is one of the most important electrolytes in the human body, as it is involved in maintaining the integrity of the skeleton and in the regulation of muscle contraction, blood pressure and nerve transmission. It is essential for the proper activity of all cells 74 , 75 . Hyperkalemia is a common disorder in patients suffering from cancers. The source of the modifications of the potassium channels and transmembrane proteins that mediate potassium transport within cells may differ between patients and may include renal failure, decreased potassium secretion and enhanced chloride reabsorption 76 – 79 . The low potassium level in the spleen detected in our study may be related to the anemia caused by tumor progression. Despite the appropriate level of potassium in erythrocytes, the lower number of red blood cells undergoing lysis in the spleen may translate into a reduced level of the element within the organ 74 , 78 , 79 .
Iron is an essential metal in the human body. It is present in hemoglobin, myoglobin and various enzymes. It thereby takes part in oxygen transport and oxygen storage in muscles, as well as in energy production and the regulation of diverse cell functions, including proliferation 80 – 82 . As mentioned above, it is also a building element of the Fe-S centers of redox proteins mediating electron transfer in the mitochondrial respiratory complexes 66 , 67 , 82 . The high iron content detected in the kidneys, hearts and lungs of animals representing the T group may be associated with the altered iron metabolism and increased blood ferritin level connected with the process of tumorigenesis 83 – 85 . The above-mentioned increased iron level was not observed in organs taken from animals subjected to implantation of the most invasive glioma cell line, U87MG. In rats from the U group, the development of a massive tumor within the brain was observed, and their very poor health condition made it necessary to shorten the experimental time. These animals presented a lowered iron content in the spleen, probably related to the anemia which is frequently observed in patients suffering from GBM. The anemia in GBM patients may be associated both with a reduced number of reticulocytes and with a lowered iron-binding capacity. Therefore, the diminished concentration of iron in the spleen of rats from the U group was probably an effect of the hemolysis of a reduced number of iron-poor erythrocytes in this organ 84 .
In the human body, copper is required for the proper action of enzymes involved in aerobic metabolism, such as cytochrome c oxidase in the mitochondria, dopamine monooxygenase in the brain, lysyl oxidase in connective tissue and ceruloplasmin 86 . Together with iron, this element takes part in the formation of red blood cells. It has also been shown that the growth and metabolism of cancer cells, due to elevated angiogenesis, require more copper 87 – 90 . As copper is also strictly associated with the progress of the inflammation process, including cytokine production, increased levels of the element are observed in serum during inflammation 87 . The inflammation process ongoing in the kidneys and lungs probably caused the increase in the copper level in these organs in animals subjected to T98G cell implantation. For the U group, the mentioned effect was noticed only for the lungs. In turn, both experimental groups were characterized by a decreased copper concentration in the spleen, which, at least in the case of the U group, may be explained by anemia.
The role and activity of Zn in the body are tightly bound to those of Fe and Cu. Zn is crucial for the development and function of immune cells, hematopoiesis, cell signaling and the inflammatory response. It also participates in DNA synthesis and protein production 91 , 92 . Mehrian-Shai et al. found that the protein p53, which has a suppressive influence on cancers, is activated by zinc 93 . Thus, the detected zinc deficiency could be associated with the modulation of immune system activity by the developing GBM. In this context, anemia of chronic disease and problems with nutrient absorption should also be taken into account 92 – 95 . In turn, the increased level of zinc detected in the lungs of animals representing both experimental groups may point to oxidative stress in the organ, but at this stage of the investigation the results are inconclusive 66 , 96 .
Selenium is a microelement important for the human body from the point of view of antioxidant processes and their role in the immune response. Its presence in proteins is connected with the amino acid selenocysteine 97 . Zhang et al. noticed that the level of the element is higher within the region of a brain tumor compared to healthy tissue 98 . Similar observations were made during our previous study, analyzing brain samples taken from the area of glioma cell implantation 35 . The increased demand for, and accumulation of, selenium in GBM tissue may be the cause of its low serum level. In turn, the mentioned effect, together with anemia of chronic disease, may explain the detected depletion of selenium in the kidneys and spleen of rats from the U group 97 – 99 . On the other hand, the elevated content of the element in the lungs, in the case of both experimental groups, and in the hearts of animals from the T group, could be associated with the selenium-driven enhancement of cell-mediated and humoral immunity 97 – 100 . The increased content of selenium in the heart may also be explained by the production of antioxidative agents containing, among others, selenium, in response to ROS travelling through the coronary system to regions highly supplied with blood 99 – 102 .
Summarizing, the conducted studies have shown that the implantation of human glioma cells into the rat brain is associated with a number of elemental anomalies in distant organs. These changes occur even when no tumor develops in the brain. The observed disorders of element homeostasis may result from many processes occurring in the body as a result of the implantation of cancer cells or the development of GBM, including inflammation, anemia of chronic disease and changes in iron metabolism. The evaluation of the anomalies detected in the three laboratories participating in the inter-comparison study showed good agreement between the obtained results. When all the experimental data were taken into account, full compliance of the results was observed for 72% of cases. For elements with higher Z, the agreement was even better, as full compliance of the results was found for 88% of cases.
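The compliance bookkeeping used above (full = all three laboratories agree, partial = two agree) can be expressed as a tiny classifier. The verdict labels here ("up", "down", "ns") are hypothetical placeholders for a laboratory's per-organ finding, not the study's actual encoding.

```python
from collections import Counter

def classify_agreement(verdicts):
    """Classify the agreement of three per-laboratory verdicts for one
    element/organ pair: 'full' if all three match, 'partial' if exactly
    two match, 'none' otherwise."""
    assert len(verdicts) == 3
    top_count = Counter(verdicts).most_common(1)[0][1]
    return {3: "full", 2: "partial"}.get(top_count, "none")
```

Running the classifier over every element/organ pair and counting the "full" fraction reproduces the 72%/88% style of summary reported above.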
What is more, the abnormalities were found in rats even if a neoplastic tumor did not develop in their brains. Most of the alterations for both experimental groups were noted in the spleen and lungs, with the element changes found in these organs running in opposite directions. The observed disorders of element homeostasis may result from many processes occurring in the animal body as a result of the implantation of cancer cells or the development of GBM, including inflammation, anemia of chronic disease and changes in iron metabolism. The tumor-induced changes in organ elemental composition detected in the cooperating laboratories were usually in good agreement. In the case of elements with higher atomic numbers (Fe, Cu, Zn and Se), 88% of the results were classified as fully compliant. Some discrepancies between the laboratories were found for the lighter elements (P, S, K and Ca). However, also in this case, the obtained results fulfilled the requirements of full agreement (the results from all three laboratories were in agreement) or partial agreement (the results from two laboratories were in agreement).
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-024-51731-2.
Acknowledgements
The authors would like to acknowledge the contribution of the COST ACTION CA18130. This work was also partially financed by the Ministry of Education and Science of Poland, the statutory research fund no. N18/DBW/000018 of the Laboratory of Experimental Neuropathology (Institute of Zoology and Biomedical Research, Jagiellonian University) and the funds granted to the AGH University of Krakow in the frame of the “Excellence Initiative – Research University” project (Action 4: A system of university grants for research carried out with the participation of doctoral students and young scientists, PL-Joanna Chwiej).
Author contributions
A.W.: methodology, resources, investigation, validation, formal analysis, visualization, writing original draft; Z.S.: supervision, methodology, resources, reviewing manuscript; D.B.: resources, investigation, reviewing manuscript; R.F.-R.: resources, investigation, reviewing manuscript; E.M.: resources, investigation, reviewing manuscript; K.M.: methodology, investigation; P.W.: resources, reviewing manuscript; J.W.-M.: methodology, investigation; N.J.-O.: methodology, investigation; J.C.: conceptualization, supervision, methodology, resources, investigation, writing original draft, corresponding author.
Data availability
The datasets analysed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:55 | Sci Rep. 2024 Jan 13; 14:1254 | oa_package/04/29/PMC10787745.tar.gz |
PMC10787746 | 38218852 | Introduction
A tumor of the sympathetic nervous system, neuroblastoma (NB) is one of the most common tumors in children [ 1 , 2 ]. The incidence of NB is estimated at 1.2 cases per 100,000 people, accounting for about 15% of all cancer deaths in children [ 3 – 6 ]. The survival rate of low- and medium-risk patients is close to 100%, but the 5-year survival rate of high-risk NB patients is lower than 50% [ 7 – 9 ]. Understanding the mechanism of NB is the key to its treatment; however, despite many advances over the past three decades, the elusive mechanism of NB carcinogenesis has been a difficult challenge for clinical and basic researchers [ 10 ].
Iron (Fe) is essential to cell proliferation [ 11 ]; tumor cells require more iron than normal cells in order to support the rapid growth of the neoplasm [ 12 ]. Ribonucleotide reductase (RNR) catalyzes the rate-limiting step in deoxynucleotide synthesis. The enzyme catalyzes the de novo synthesis of deoxynucleotide triphosphates (dNTPs), generating 2-deoxynucleotides through reduction at carbon atom 2 of ribose 5-phosphate; the formed deoxynucleotides are then used for DNA synthesis [ 13 ]. The activation of RNR is dependent on Fe, since the enzyme complex's R2 subunit contains a tyrosyl radical that is stabilized by Fe. In addition, the DNA polymerases, primases, and helicases that play important roles in DNA replication are dependent on Fe 2+ or iron-sulfur (Fe-S) clusters [ 12 , 14 ]. Thus, Fe may be considered a cofactor for tumor cell proliferation.
Cancer growth can be seen as an imbalance between cell gain and cell loss, with mutated tumor cells multiplying faster than they die [ 15 ]. Apoptosis is a key physiological mechanism that limits cell population expansion, either to maintain tissue homeostasis or to eliminate potentially harmful cells, such as those with DNA damage [ 16 ]. As a CREB/ATF family member, ATF3 is frequently up-regulated by a wide variety of intra- and extracellular stressors [ 17 ]. ATF3 plays a key role in regulating cell behavior by homo- or hetero-dimerizing with ATF members, activating or inhibiting downstream genes [ 18 ]. Several studies have shown that ATF3 plays an important role in apoptosis by regulating downstream signaling pathways, such as ERK1/2, JNK, P38, and NF-κB [ 19 , 20 ].
The type II transmembrane serine protease (TTSP) family is a class of proteolytic enzymes that are fixed to the cell membrane through the transmembrane region of the amino terminus [ 21 ]. The location of these proteins on the surface of cells puts them in a unique position to mediate signal transduction between cells and their surroundings, endowing this family of enzymes with important roles in many biological processes in mammals [ 22 ]. There are 17 TTSP members in humans. Tmprss6 is one of these and plays a key role in iron homeostasis by modulating hepcidin, a hepatic peptide hormone that binds to and downregulates ferroportin 1 (FPN1), the only known cellular iron transporter. Interestingly, Tmprss6 expression has been reported in breast and prostate cancers [ 23 , 24 ]; however, little is known about the molecular function of Tmprss6 in cancer. Here, we show that overexpression of Tmprss6 significantly inhibited the proliferation of neuro-2a cells, stimulating significant cell death. Our results identify Tmprss6 as a new target for inhibiting the growth of neuronal tumors. | Materials and methods
Reagents
The following reagents were used: Minimum Essential Medium (Invitrogen, Carlsbad, CA, USA), fetal calf serum (Invitrogen, USA), Nonessential Amino Acid Solution (Invitrogen, USA), TRIzol reagent (15596018, Invitrogen, USA), Tris and Glycine (Amresco, Washington, USA), Reverse Transcriptase MMLV, dNTP Mixture and Recombinant RNase Inhibitor (TaKaRa, Osaka, Japan). The following antibodies were used: β-actin (1:10000, cw0096m, CWbio, Beijing, China), FtL (1:5000, ab109373, Abcam, SF, CA, USA), FtH (1:5000, ab183781, Abcam, USA), Hepcidin (1:5000, ab30760, Abcam, USA), Tmprss6 (1:10000, 12950-1-AP, Proteintech, Wuhan, China), FLAG (1:20000, 80010-1-RR, Proteintech, China), HJV (1:5000, 11758-1-AP, Proteintech, China), P-Smad1/5/8 (1:1000, #9516, Cell Signaling Technology, St. Louis, MA, USA), Smad1 (1:1000, #6944, Cell Signaling Technology, USA), Smad4 (1:1000, #46535, Cell Signaling Technology, USA), FPN1 (1:5000, MTPP11-S, ADI, San Antonio, Texas, USA), TfR1 (1:5000, 13-6800, Invitrogen, USA), Bcl-2 (1:5000, 26593-1-AP, Proteintech, China), Bax (1:5000, 50599-2-Ig, Proteintech, China), Caspase3 (1:3000, #9662 S, Cell Signaling Technology, USA), Cleaved-caspase3 (1:3000, #9664, Cell Signaling Technology, USA), ATF3 (1:1000, ABP55330, Abbkine, Wuhan, China), P-p38 (1:2000, #4511, Cell Signaling Technology, USA), P38 (1:2000, #8690 S, Cell Signaling Technology, USA), ACSL4 (1:5000, ab155282, Abcam, USA), GPX4 (1:5000, ab125066, Abcam, USA), RIP1 (1:5000, 17519-1-AP, Proteintech, China), RIP3 (1:5000, 17563-1-AP, Proteintech, China), protein marker (26617, Thermo, Carlsbad, CA, USA), anti-rabbit IgG (1:20000, RPN4301, Amersham, London, UK), anti-mouse IgG (1:20000, RPN4201, Amersham, UK).
Cell culture
Neuro-2a (ATCC, NO. CCL131, WT group), SH-SY5Y (ATCC, NO. CRL2266), Vector pcDNA3.1-transfected cells (Vector group), Tmprss6-transfected cells (Tmprss6 group), Smad4-transfected cells (Smad4 group), Scrambled shRNA-transfected cells (Scrambled shRNA group), ATF3 shRNA-transfected cells (ATF3 shRNA group) and Tmprss6 shRNA-transfected cells (Tmprss6 shRNA group) were maintained in MEM supplemented with fetal calf serum (10%, vol/vol), nonessential amino acids (0.1 mM), glucose (4.5 mg/ml), penicillin (100 U/ml), and streptomycin (100 mg/ml) in humidified 5% CO 2 and 95% air at 37 °C. Vector and Tmprss6 cells were maintained in G418 (500 μg/ml) to select stable Tmprss6-transfected neuro-2a cells.
Cell transfection
Efficient cell transfection experiments were performed using Lipofectamine TM 3000 kits (L3000015, Invitrogen, USA), according to the manufacturer’s instructions. Briefly, neuro-2a cells were first inoculated in six-well plates and allowed to grow to a density of 70%–90%. Next, the plasmid DNA–liposome complexes were prepared: 2 μl Lipofectamine TM 3000 and 4 μl P3000 were added to every 2 μg of plasmid DNA and then diluted with Opti-MEM medium. Finally, the DNA–liposome complex was added to the neuro-2a cells, which were placed in a 37 °C, 5% CO 2 tissue culture incubator, generally for 48–72 h, before evaluation of transfected gene expression.
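The stated reagent ratio (2 μl Lipofectamine 3000 and 4 μl P3000 per 2 μg of plasmid DNA) scales linearly with the amount of DNA. A hypothetical helper for that arithmetic, not part of the protocol itself, might look like:

```python
def lipofection_volumes(dna_ug, lipo_ul_per_2ug=2.0, p3000_ul_per_2ug=4.0):
    """Scale the protocol's reagent ratio to an arbitrary DNA mass (in ug).
    Default ratios follow the text; the helper is purely illustrative."""
    factor = dna_ug / 2.0
    return {
        "lipofectamine_3000_ul": lipo_ul_per_2ug * factor,
        "p3000_ul": p3000_ul_per_2ug * factor,
    }
```

For a 5 μg transfection this gives 5 μl Lipofectamine 3000 and 10 μl P3000.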
Immunofluorescence assay
Cells were fixed in 4% paraformaldehyde for 1.5–2 h, the fix solution was discarded, and the cells were washed 3 times with 0.01 M phosphate-buffered saline (PBS) for 5 min. A 0.5% Triton-100 solution was then applied for a 10 min treatment, after which the samples were washed twice with 0.01 M PBS. Goat serum (diluted 1:10 with PBS) was added, after which the samples were incubated at 37 °C for 50 min. The primary antibody, diluted in PBS, was added drop-wise, and the samples were incubated at 4 °C overnight. The samples were returned to room temperature for 15 min and washed 3 times with 0.01 M PBS for 5 min. Rhodamine-labeled, goat anti-rabbit secondary antibodies (1:200) were added, and the samples were incubated at room temperature for 90 min. The samples were then washed 4 times with 0.01 M PBS for 5 min. DAPI (1:1000, diluted with PBS; 4 min) was used to stain the nuclei. Excess DAPI was removed by washing 6 times with 0.01 M PBS for 5 min. Images were acquired using a fluorescence confocal microscope (Olympus FV3000, Japan).
Western blot
For the extraction of total protein from tumor tissues from nude mice or neuro-2a cells, the samples were first placed into RIPA buffer and centrifuged at 12,000 × g for 20 min. For the nuclear and cytoplasmic protein isolation from neuro-2a cells, a nuclear and cytoplasmic protein extraction kit (P0027, Beyotime, Shanghai, China) was used according to the manufacturer’s instructions. Briefly, the cell samples were added to cytoplasmic protein extraction reagent A, violently shaken for 5 s, placed in an ice bath for 10–15 min, added to cytoplasmic protein extraction reagent B, violently shaken for 5 s, and placed in an ice bath for 1 min. After centrifugation at 12,000 × g for 5 min, the supernatant contained the cytoplasmic proteins. Nuclear protein extraction reagent was added to the precipitate, which was violently shaken for 15–30 s, placed in an ice bath for 2 min, and centrifuged at 12,000 × g for 10 min; the supernatant contained the nuclear protein. The protein supernatant in the above process was collected and quantified using a BCA kit (Kang Wei, China). The samples were resolved by SDS-PAGE (10–12% acrylamide), and then transferred to nitrocellulose membranes (Millipore, Bedford, MA, USA). The membranes were blocked with 5% skim milk in TBS-T for 1.5 h and then incubated with primary antibodies overnight at 4 °C. The membranes were washed with TBS-T buffer and then incubated for 90 min at room temperature with anti-rabbit or anti-mouse IgG conjugated with horseradish peroxidase. After washing, immunoreactive proteins were detected by the enhanced chemiluminescence (ECL) method.
Quantitative real-time reverse transcription-PCR (qRT-PCR)
Neuro-2a cells were homogenized with TRIzol reagent, extracted with chloroform, and precipitated with isopropyl alcohol, according to the manufacturer’s instructions. After the RNA pellet was washed twice with 75% ethanol, the RNA was reverse transcribed with MMLV reverse transcriptase and Oligo-dT primers. SYBR Green PCR Master Mix was used for PCR amplification. The cycle threshold (Ct) value for a given gene of interest was first normalized to β-actin in the same sample, and then the relative differences between the control and each of the other groups were calculated using the 2^(−ΔΔCt) equation and expressed as fold changes relative to the control group. The primer sequences used for amplification were as follows:
Tmprss6 forward: 5’-TTGCTGGTCTTGGCTGCGCT-3’
Tmprss6 reverse: 5’-AATGACGGTTGAGCACCCGGAG-3’
ATF3 forward: 5’-GCCAAGTGTCGAAACAAGAAAAAG-3’
ATF3 reverse: 5’-TCCTCGATCTGGGCCTTCAG-3’
Bnip3 forward: 5’-CCTGTCGCAGTTGGGTTC-3’
Bnip3 reverse: 5’-GAAGTGCAGTTCTACCCAGGAG-3’
β-actin forward: 5’-AGGCCCAGAGCAAGAGAGGTA-3’
β-actin reverse: 5’-TCTCCATGTCGTCCCAGTTG-3’
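The relative-quantification step described above reduces to a few lines of arithmetic. The sketch below is illustrative only; the Ct values are hypothetical examples, not data from the study.

```python
# Relative quantification by the 2^(-ddCt) method described above.
# Ct values are hypothetical, purely to illustrate the arithmetic.

def relative_expression(ct_gene, ct_ref, ct_gene_ctrl, ct_ref_ctrl):
    """Fold change of a gene vs. the control group, normalized to the
    beta-actin (reference) Ct measured in the same sample."""
    d_ct = ct_gene - ct_ref                  # normalize within sample
    d_ct_ctrl = ct_gene_ctrl - ct_ref_ctrl   # normalize control sample
    dd_ct = d_ct - d_ct_ctrl                 # difference vs. control
    return 2 ** (-dd_ct)

# A gene amplifying 3 cycles earlier than in the control (with
# identical reference Cts) corresponds to an 8-fold up-regulation:
print(relative_expression(22.0, 17.0, 25.0, 17.0))  # 8.0
```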
Immunoprecipitation (IP)
Non-denatured lysate (P0013, Beyotime, China) was added to the cell samples, which were then placed in an ice bath for 10 min and centrifuged (12,000 × g ). The supernatants were collected, protein A/G beads (coupled with a FLAG or IgG antibody) were added to the supernatants, and the samples were slowly shaken at 4 °C in a silent mixer overnight. On the second day, after the immunoprecipitation reaction, the protein A/G beads were centrifuged for 5 min at 12,000 × g and washed 3 times with pre-cooled PBS. After adding SDS-PAGE loading buffer, the samples were incubated in a 95 °C water bath for 5 min and centrifuged. Finally, the supernatants were collected for western blot analysis.
Measurement of total cellular iron levels by ICP-MS
Total cellular iron levels were measured by ICP-MS using a previously described method [ 42 ]. Briefly, the cell samples were thermally digested in 70% nitric acid using a microwave method at an asymptotic temperature. After the digested samples were diluted, an Agilent 7500ce ICP-MS (Agilent Technologies, Santa Clara, CA) was used to determine the total iron content of the samples. An 8-point calibration curve was generated before sample analysis. At least 3 samples of each cell preparation were analyzed by ICP-MS. The total iron content of each sample was normalized to its dry weight.
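The quantification step above (calibration curve, then conversion to iron per dry weight) can be sketched as follows. All numbers are illustrative, not measured values from the study, and the dilution scheme is a simplifying assumption.

```python
# Sketch of the ICP-MS quantification: fit the 8-point calibration
# curve (detector counts vs. standard concentration) by least squares,
# then convert a sample's counts into iron content per dry weight.
# All numbers are hypothetical.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx

std_ppb = [0, 1, 2, 5, 10, 20, 50, 100]  # 8 calibration standards (ng/mL)
counts = [12, 1010, 2008, 5020, 10010, 20005, 50030, 100012]
slope, intercept = linear_fit(std_ppb, counts)

sample_counts = 25000.0
conc_ppb = (sample_counts - intercept) / slope  # ng Fe per mL of digest
digest_volume_ml = 5.0                          # assumed digest volume
dry_weight_mg = 4.2                             # assumed dry weight
ng_fe_per_mg = conc_ppb * digest_volume_ml / dry_weight_mg
print(round(ng_fe_per_mg, 1))
```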
Fe2+ content
Cytoplasmic ferrous iron content was assessed using FerroOrange (DojinDo, Kyushu Island, Japan). The assay does not detect ferric iron that is bound to proteins. After reduction to the ferrous form (Fe2+), cytoplasmic Fe2+ (Cyto-Fe) reacts with the probe to produce a stable fluorescent complex. The cells were counterstained with Hoechst (1:1000, diluted with PBS) for 30 min at 37 °C. After washing the samples 3 times for 5 min with 0.01 M PBS, the fluorescence intensity was analyzed using a confocal microscope (Olympus FV3000, Japan).
RNR activity assay
RNR activity in neuro-2a cells was measured using a kit from Mlbio (NO. YJ151420, Shanghai, China), according to the manufacturer’s instructions.
Assessment of apoptosis by flow cytometry and TUNEL staining
TUNEL detection was performed using a TUNEL FITC Apoptosis Detection Kit (Vazyme Biotech CO., Nanjing, China), according to the manufacturer’s instructions. Briefly, tissue slides or neuro-2a cells were pretreated with 10 μg/ml proteinase K for 10 min and then incubated with the reaction mixture containing terminal deoxynucleotidyl transferase (TdT) and fluorescein-conjugated deoxyuridine triphosphate (dUTP) for 1 h at 37 °C. The nuclei were counterstained with DAPI, and images were acquired using a confocal microscope (Olympus FV3000, Japan).
Apoptosis was detected using a FITC-Annexin V apoptosis assay kit (#C1062L, Beyotime, China), according to the manufacturer’s instructions. Neuro-2a cells were collected and stained with annexin V at 37 °C for 10 min. Next, the samples were centrifuged at room temperature at 1000 × g for 5 min. After washing the cells twice with PBS, the samples were stained with propidium iodide (PI). The percentage of apoptotic cells was analyzed by flow cytometry (CytoFLEX, Beckman Coulter).
Cell cycle analysis
Cell cycle distribution was assessed using a Cell Cycle Analysis Kit (#C1052, Beyotime, China), according to the manufacturer’s instructions. Neuro-2a cells were collected and stained with PI at 37 °C for 30 min, after which the samples were centrifuged at room temperature at 1000 × g for 5 min. The cells were washed twice with PBS, and the percentage of cells in each cell cycle phase was analyzed by flow cytometry (CytoFLEX, Beckman Coulter).
RNA sequencing
Total RNA was extracted using TRIzol. The mRNA was sequenced on the Illumina HiSeq platform. Differential expression analysis between the experimental and control groups was performed using the DESeq2 R package (1.16.1). The data were visualized as a volcano plot. Gene Ontology (GO) analysis of the differentially expressed genes was implemented using the clusterProfiler R package.
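The differential-expression screen itself used the DESeq2 R package; purely as an illustration of how genes are flagged for a volcano plot, here is a Python sketch. The fold changes and adjusted p-values are hypothetical, and the cutoffs are common defaults rather than values stated in the text.

```python
# Hypothetical post-processing of differential-expression output:
# classify genes for a volcano plot by |log2 fold change| and
# adjusted p-value. Numbers are illustrative, not study data.

results = {
    # gene: (log2_fold_change, adjusted_p)
    "Atf3": (2.8, 1e-6),
    "Bnip3": (1.9, 3e-4),
    "Gapdh": (0.1, 0.9),
}

def volcano_class(lfc, padj, lfc_cut=1.0, p_cut=0.05):
    """Return 'up', 'down', or 'ns' (not significant)."""
    if padj >= p_cut or abs(lfc) < lfc_cut:
        return "ns"
    return "up" if lfc > 0 else "down"

labels = {gene: volcano_class(*vals) for gene, vals in results.items()}
print(labels)  # {'Atf3': 'up', 'Bnip3': 'up', 'Gapdh': 'ns'}
```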
Allograft tumor growth in nude mice
Male, athymic Balb/c nu/nu mice, 4 weeks of age and free of specific pathogens, were acquired from Vital River Laboratory Animal Technology (Beijing, China). The mice were housed in sterile, microisolated cages on a 12-hour light/dark cycle in a specific pathogen-free facility. The animals had free access to pathogen-free water and food. Tumor cells (1 × 10^7 cells/ml in 0.2 ml PBS) were injected subcutaneously into the mice. Once tumors became visible, their growth was monitored weekly. Five weeks after injection, the mice were humanely killed, and the primary tumor volumes and weights were measured.
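The text does not state how the tumor volumes were computed; a commonly used approximation for caliper-measured subcutaneous tumors is the modified-ellipsoid formula, sketched here with hypothetical readings.

```python
# Modified-ellipsoid approximation V = length * width^2 / 2, a common
# convention for subcutaneous allografts (the study does not specify
# its formula; the caliper readings below are hypothetical).

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    return length_mm * width_mm ** 2 / 2.0

print(tumor_volume_mm3(10.0, 8.0))  # 320.0 mm^3
```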
Statistical analysis
All experiments were performed at least in triplicate. Statistical analyses were conducted using Prism 7 (GraphPad Software, USA). The values are reported as the mean ± SD. Two-group comparisons were conducted using Student’s t test (two-tailed), while multi-group comparisons were conducted by one-way ANOVA with Tukey’s post hoc analysis. P values < 0.05 were considered statistically significant.

Results
Overexpression of Tmprss6 in neuro-2a cells
To explore the relationship between Tmprss6 and NB, we manipulated the levels of Tmprss6 in the NB cell line, neuro-2a [ 25 ]. We confirmed increases in Tmprss6 and FLAG expression in the cells by qRT-PCR and western blot analysis, respectively (Fig. 1A–D ), and the distribution of the overexpressed protein on the neuro-2a cell membrane by immunofluorescence staining (Fig. 1E ). These results indicate successful overexpression of Tmprss6 in neuro-2a cells.
Tmprss6 overexpression inhibits the Bmp-Smad signaling pathway and regulates hepcidin expression by cleaving HJV
To identify the role of Tmprss6 in neuro-2a cells, we evaluated the levels of HJV, a substrate of Tmprss6 [ 26 ]. As expected, Tmprss6 overexpression (Tmprss6 group) decreased the levels of HJV via cleavage of the protein, compared with the Vector and WT groups (Fig. 2A–C ). Since HJV is a co-receptor of Bmp [ 27 ], the decrease in HJV levels inhibited cytosolic Smad1/5/8 phosphorylation (P-Smad1/5/8) and significantly reduced the levels of P-Smad1/5/8 in the nucleus (Fig. 2D, E ). Nuclear translocation of cytoplasmic P-Smad1/5/8 requires binding to Smad4. We found that, after Tmprss6 overexpression (Tmprss6 group), the expression of Smad4 in the cytoplasm was decreased, while the levels of the protein in the nucleus were increased, compared with the Vector and WT groups (Fig. 2D, F ). The decreased P-Smad1/5/8 levels in the nucleus inhibited pro-hepcidin expression (Fig. 2G, H ). To investigate whether the low levels of pro-hepcidin in neuro-2a cells regulated FPN1, possibly affecting intracellular iron content, we analyzed FPN1 levels by western blot analysis. As shown in Fig. 2G, H , FPN1 levels significantly increased in neuro-2a cells overexpressing Tmprss6, compared with the Vector and WT groups.
Overexpression of Tmprss6 decreases intracellular iron content by increasing the expression of FPN1, thus inhibiting RNR activity and preventing progression to the cell cycle S phase
FPN1, the only known iron-exporter protein, plays an important role in the regulation of intracellular iron [ 28 ]. As shown in Fig. 2I, J , compared to the control groups, the levels of FtH and FtL, the subunits of ferritin, a ubiquitous iron storage protein, were significantly decreased, while the levels of TfR1 protein, the gateway to cellular iron uptake, were significantly increased in neuro-2a cells with elevated Tmprss6. Consistent with these results, we found that the intracellular total iron content (Fig. 2K ) and Fe2+ content (Fig. 2L, M ) decreased significantly in the Tmprss6 group. Given these indications that the cells were iron starved, we proceeded to evaluate RNR activity to see if the low levels of iron limited RNR function. The RNR activity was indeed significantly decreased in the Tmprss6 group (Fig. 2N ). The decrease in RNR activity coincided with cell cycle arrest in the Tmprss6 group, where there was a significant decrease in cells in the S phase compared with the Vector and WT groups (Fig. 2O, P ). These results suggest that Tmprss6 can slow cell proliferation by decreasing the iron available for RNR activity.
Overexpression of Tmprss6 induces apoptosis in neuro-2a cells
In our cell culture experiments, we were surprised to find that Tmprss6 overexpression not only inhibited cell proliferation, but also stimulated cell death. Therefore, we proceeded to examine which forms of cell death, including apoptosis, necrosis, and ferroptosis, may have been occurring. Our results demonstrate that Tmprss6 overexpression was not associated with ferroptosis (Supplementary Fig. S2A, B ) or necrosis (Supplementary Fig. S2C, D ), but was closely associated with apoptosis (Fig. 3 ). As shown in the annexin V assay results in Fig. 3A, B , the percentage of apoptotic cells was ~1% in the Vector and WT groups, while in the Tmprss6 group it increased significantly, by about 12-fold. TUNEL assay also revealed that Tmprss6 overexpression caused a significant increase in apoptotic bodies compared to the Vector and WT groups (Fig. 3C ). Finally, we evaluated the expression of Bcl-2, Bax, and cleaved-caspase3 and found that the Bcl-2/Bax ratio was significantly decreased (Fig. 3D, E ), while cleaved-caspase3 levels (Fig. 3D, F ) significantly increased in the Tmprss6 group, compared with the Vector and WT groups. Together, these results indicate that the cell death caused by Tmprss6 overexpression was due to apoptosis.
Tmprss6 overexpression-mediated apoptosis in neuro-2a cells is due to activation of the ATF3/P38 signaling pathway
To explore how Tmprss6 overexpression induces apoptosis, we used RNA sequencing to screen for changes in apoptosis-stimulating gene expression. Compared with the Vector group, Tmprss6 overexpression significantly up-regulated the expression of ATF3 and Bnip3, as shown in the volcano plot (Fig. 4A ). In the GO analysis, the Bmp signaling pathways were the most closely associated with the changes in gene expression (Fig. 4B ). We validated the changes in ATF3 and BCL2/adenovirus E1B 19 kDa interacting protein 3 (Bnip3) gene expression by qRT-PCR (Fig. 4C ). We also examined ATF3 protein levels and found them to be increased (Fig. 4D, E ). To investigate whether overexpression of Tmprss6 causes nuclear translocation of ATF3, we performed western blot analysis on cytoplasmic and nuclear cell isolates. As shown in Fig. 4F, G , compared with the controls, there was a shift of ATF3 from the cytoplasm to the nucleus after Tmprss6 overexpression. The increased nuclear translocation of ATF3 in the Tmprss6 group was also apparent in immunofluorescence experiments (Fig. 4H ). We also found an increase in phosphorylated p38 (Fig. 4I, J ); the nuclear translocation of ATF3 is known to activate p38 mitogen-activated protein kinases, ultimately leading to apoptosis [ 29 ].
Overexpression of Tmprss6 decreases iron levels to inhibit RNR activity and mediate apoptosis in SH-SY5Y cells
We also investigated the downstream effects of Tmprss6 activity in the human neuroblastoma cell line SH-SY5Y. As shown in Fig. 5A, B , we overexpressed Tmprss6 (Tmprss6 group) in the SH-SY5Y cells. We then assessed the levels of HJV, P-Smad1/5/8, Smad4, ATF3, pro-hepcidin, FPN1, TfR1, FtL, FtH, RNR activity, P-p38, Bcl-2, Bax, and cleaved-caspase3 (Fig. 5C–R ) in the Tmprss6-overexpressing cells. Although the magnitude of the changes differed somewhat, these results are consistent with our findings in neuro-2a cells overexpressing Tmprss6, demonstrating that the downstream effects of Tmprss6 activity in neuro-2a cells are also applicable to SH-SY5Y cells.
Overexpression of Smad4 induces nuclear translocation of ATF3
To explore the mechanism whereby Tmprss6 overexpression results in ATF3 nuclear translocation, we constructed a Smad4-FLAG overexpression plasmid to simulate the increased Smad4 expression in the nucleus caused by Tmprss6 overexpression (Tmprss6-FLAG is not expressed in the experiments in Fig. 6 ). As shown in Fig. 6A, B , compared with the controls, the increased expression of Smad4 was accompanied by increased expression of ATF3. Overexpression of Smad4 also induced nuclear translocation of ATF3 compared to the Vector and WT groups (Fig. 6C–E ). We confirmed this shift of ATF3 to the nucleus by immunofluorescence experiments (Fig. 6F ). To explore the molecular mechanism of the ATF3 nuclear translocation caused by Smad4 overexpression, we performed immunoprecipitation experiments, the results of which suggest that an interaction between ATF3 and Smad4 occurs (Fig. 6G ). Thus, ATF3 nuclear translocation may be the consequence of its binding to Smad4.
Disruption of ATF3 expression alleviates apoptosis induced by overexpression of Tmprss6 in neuro-2a cells
To further confirm the role of ATF3 in Tmprss6 overexpression-mediated apoptosis, we inhibited the expression of ATF3 via short hairpin RNA (shRNA). As shown in Fig. 7A–C , the expression of ATF3 was significantly inhibited in the ATF3-targeting shRNA group compared with the Scrambled shRNA and WT groups. Inhibition of ATF3 expression by the shRNA in Tmprss6-overexpressing cells decreased the ratio of P-p38/p38 (Fig. 7D, E ) and the levels of cleaved-caspase3 (Fig. 7H, I ), while the Bcl-2/Bax ratio was elevated (Fig. 7F, G ). TUNEL assay also revealed that inhibition of ATF3 expression in Tmprss6-overexpressing cells significantly reduced the formation of apoptotic bodies (Fig. 7J ). These results confirm that ATF3 plays a central role in Tmprss6 overexpression-mediated apoptosis.
Overexpression of Tmprss6 inhibits tumor growth and initiates apoptosis
To examine whether Tmprss6 overexpression can affect NB tumor growth, we subcutaneously implanted neuro-2a cells into nude mice. The tumors derived from cells overexpressing Tmprss6 grew at a significantly lower rate than those in the Vector group (Fig. 8A ). We also measured the final tumor volumes and weights. As shown in Fig. 8B, C , compared with the Vector group, both of these values were significantly decreased in the Tmprss6 group. Hematoxylin and eosin staining of tumor sections showed that the Vector group tumors were quite dense, while the tumor structure in the Tmprss6 group was looser, with lighter nuclear staining (Supplementary Fig. S3 ). We confirmed the continuous overexpression of Tmprss6 and FLAG during tumor growth by western blot analysis (Fig. 8D, E ). We also used western blot analysis to examine the levels of ATF3, P-p38, p38, Bcl-2, Bax, and cleaved-caspase3 in tumor tissue (Fig. 8F–L ). Consistent with our cell culture experimental results, ATF3, P-p38, Bax, and cleaved-caspase3 expression increased significantly, while Bcl-2 expression decreased significantly in the Tmprss6 group. TUNEL staining further confirmed that overexpression of Tmprss6 induced the production of apoptotic bodies in tumor tissues (Fig. 8M ). These results suggest that overexpression of Tmprss6 inhibits tumor growth and initiates apoptosis through the ATF3/P38 signaling pathway.

Discussion
Proteolytic enzymes have long been thought to be involved in carcinogenesis, since they can hydrolyze the extracellular matrix (ECM), allowing cancer cells to escape the basement membrane and invade surrounding tissues [ 22 ]. Notably, cell surface proteases, such as Tmprss6, also activate a variety of growth factors and their associated receptors, which are essential for the activation of oncogenic signaling pathways [ 30 – 32 ]. So far, several members of the type II transmembrane serine protease (TTSP) family have been linked to cancer progression [ 32 – 34 ].
Multiple clinical studies have shown that Tmprss6 levels decrease during tumor progression, and that low Tmprss6 expression correlates with poor prognosis in triple-negative breast cancer [ 23 , 35 , 36 ]. Consistent with these clinical observations, overexpression of Tmprss6 has been found to inhibit the invasion and growth of breast and prostate cancer cells in both in vivo and in vitro experiments [ 36 , 37 ]. Webb et al. [ 37 ] suggested that Tmprss6 may inhibit the development of prostate cancer cells by reducing the levels of β-catenin in the tumor cell membrane. Knockout of Tmprss6 at the cellular level resulted in increased levels of β-catenin, while overexpression of Tmprss6 had the opposite effect. Although Tmprss6 is well known for its association with some types of cancer, surprisingly little is known about the mechanisms by which it is involved in the development and growth of cancer, especially in the molecular control of the cell cycle and apoptotic processes in tumor tissues.
Here, we evaluated the role of Tmprss6 in a neuroblastoma cell line and its derived tumors. Since elevated Tmprss6 interfered with cell cycle progression and triggered apoptosis, we used the overexpression model to investigate the mechanism of neuro-2a growth inhibition. Previous studies have shown that Tmprss6 cleaves HJV, a BMP co-receptor, on the surface of hepatocytes, modulating the BMP/SMAD signaling pathway that influences HAMP expression [ 26 ]. Consistent with this, we observed that Tmprss6 overexpression in neuro-2a cells reduced the level of HJV through its cleavage (Fig. 2A–C ), thereby inhibiting the Bmp-Smad signaling pathway (Fig. 2D–F ) and reducing the expression of pro-hepcidin (Fig. 2G, H ). Meanwhile, we also tested the effects of Tmprss6 knockdown in neuro-2a cells, which significantly activated the Bmp-Smad signaling pathway (Supplementary Fig. S4 ). The decreased expression of pro-hepcidin lowered the total intracellular iron and Fe2+ content by increasing the level of FPN1 (Fig. 2G–M ). The decrease in Fe2+ content inhibited the activity of RNR, thus arresting the cell cycle ahead of the S phase and ultimately inhibiting tumoral cell proliferation (Fig. 2N–P ).
Apoptosis plays a key role in the pathogenesis of numerous diseases [ 38 ]. In neurodegenerative diseases, pathogenesis entails an excess of apoptosis [ 39 ], whereas in cancer, too little apoptosis can be the culprit, enabling the expansion of neoplastic cells [ 40 ]. The mechanism of apoptosis is complex and involves several pathways. Importantly, apoptosis is an important target in the treatment of cancer [ 41 ]. In our study, we found that Tmprss6 overexpression in neuro-2a cells leads to a decrease in proliferation by stimulating apoptosis (Fig. 3 ), without affecting ferroptosis or necrosis (Supplementary Fig. S2 ). RNA sequencing revealed that Tmprss6 overexpression led to increased levels of ATF3 and Bnip3, while GO analysis showed functional enrichment of Bmp pathways (Fig. 4A, B ). Meanwhile, we confirmed that there was increased expression of ATF3 and Bnip3 (Fig. 4C–E ). Elevated ATF3 levels promote nuclear translocation and activate downstream signaling via phosphorylation of p38, which mediates apoptosis (Fig. 4F–J ).
We also found that the expression and nuclear translocation of ATF3 was increased in neuro-2a cells when Smad4 was overexpressed (Fig. 6A–F ), which is likely the result of an interaction between the two proteins, as we demonstrated by immunoprecipitation experiments (Fig. 6G ). Thus, Smad4 not only assists in the nuclear translocation of P-Smad1/5/8, but also stimulates the nuclear translocation of ATF3. We conjecture that since the overexpression of Tmprss6 inhibits the phosphorylation of Smad1/5/8, the amount of Smad4 bound to P-Smad1/5/8 is accordingly decreased, freeing up Smad4 to promote an increased nuclear translocation of ATF3.
Figure 9 presents a schematic representation of the possible mechanism of Tmprss6-mediated inhibition of tumor growth. We propose Tmprss6 as a new candidate target for inhibiting neuronal tumor cell proliferation and mediating apoptosis in cancer.

Abstract

Transmembrane serine protease 6 (Tmprss6) has been correlated with the occurrence and progression of tumors, but any specific molecular mechanism linking the enzyme to oncogenesis has remained elusive thus far. In the present study, we found that Tmprss6 markedly inhibited mouse neuroblastoma N2a (neuro-2a) cell proliferation and tumor growth in nude mice. Tmprss6 inhibits Smad1/5/8 phosphorylation by cleaving the bone morphogenetic protein (BMP) co-receptor, hemojuvelin (HJV). Ordinarily, phosphorylated Smad1/5/8 binds to Smad4 for nuclear translocation, which stimulates the expression of hepcidin, ultimately decreasing the export of iron through ferroportin 1 (FPN1). The decrease in cellular iron levels in neuro-2a cells with elevated Tmprss6 expression limited the availability of the metal for ribonucleotide reductase activity, thereby arresting the cell cycle prior to S phase. Interestingly, Smad4 promoted nuclear translocation of activating transcription factor 3 (ATF3) to activate the p38 mitogen-activated protein kinase signaling pathway by binding to ATF3, inducing apoptosis of neuro-2a cells and inhibiting tumor growth. Disruption of ATF3 expression significantly decreased apoptosis in Tmprss6-overexpressing neuro-2a cells. Our study describes a mechanism whereby Tmprss6 regulates the cell cycle and apoptosis. Thus, we propose Tmprss6 as a candidate target for inhibiting neuronal tumor growth.
Supplementary information
The online version contains supplementary material available at 10.1038/s41419-024-06442-x.
Acknowledgements
This study was funded by the National Natural Science Foundation of China (grant number 31471035), Foundations of the Key Laboratory of Animal Physiology, Biochemistry and Molecular Biology of Hebei Province, Ministry of Education Key Laboratory of Molecular and Cellular Biology, Hebei Collaborative Innovation Center for Eco-Environment, China and Hebei Research Center of the Basic Discipline of Cell Biology (C2023205049).
Author contributions
Y-ZC and YZ conceived and designed the experiments. YZ, JB, HB, ST, and HS performed the experiments. YZ, ZS, PY, GG, and YL completed the statistical analysis of the data. Yong Zuo wrote the manuscript with help from Y-ZC.
Data availability
Additional data can be found in the Supplementary materials. The remaining datasets and material generated in the study are available from the corresponding authors upon reasonable request.
Competing interests
The authors declare no competing interests.
Ethical approval
All procedures were carried out in accordance with the Guide for the Care and Use of Laboratory Animals issued by the National Institutes of Health and were approved by the Animal Care and Use Committee of the Hebei Science and Technical Bureau in the PRC.

Citation: Cell Death Dis. 2024 Jan 13; 15(1):49. License: CC BY.
PMC10787747 (PMID: 38218732)

Introduction
Wafer bonding technology has evolved significantly, particularly in the field of electronic applications manufactured at the wafer level to achieve high throughput. This advancement is especially critical for the packaging and assembly of MEMS devices, optical systems, and CMOS applications. Among these, wafer bonding of Si and Si-based materials is widely adopted for various applications including the packaging of MEMS sensors 1 , optical system sealing 2 , microchannel devices 3 , and 3D stacking of memory and logic devices 4 .
The formation of a SiO2 layer at the bonding interface is a fundamental and highly effective approach for wafer bonding. One representative method in this regard is hydrophilic bonding, in which the Si surface containing physisorbed water and chemisorbed OH groups forms hydrogen and molecular bonds at the bonding interface 5 , 6 . In order to enhance the bonding quality and lower the process temperature, plasma treatments are widely employed to render Si bonding surfaces hydrophilic, facilitating water adsorption. Subsequent post-bonding annealing at 200-600 °C promotes Si oxidation at the bonding interface, thereby increasing the bond strength 7 , 8 .
In the field of MEMS packaging, glass frit bonding is also widely adopted, in which an interfacial layer is formed using low-melting-point glass pastes 9 , 10 . Here, the bonding surfaces are coated with such glass pastes, and thermocompression bonding at 350-600 °C induces adhesion between the substrates through the reflowed glass layer. Hence, the formation of such an interfacial layer is pivotal for ensuring mechanical reliability.
However, the bonding process temperature is a crucial factor in industrial applications. Elevated temperatures can deteriorate device performance and the bonding interface’s reliability due to mismatches in the coefficient of thermal expansion between substrates. Consequently, low-temperature bonding, particularly at room temperature, is needed to accommodate next-generation electronics.
Regarding the formation of SiO2, the conversion of polysilazane into SiO2 is a distinctive route, represented by perhydropolysilazane (PHPS, (SiH2NH)n), which undergoes hydrolysis when it interacts with water. This reaction leads to the formation of SiO2, along with the release of byproducts such as NH3 and H2 ( ) 11 , 12 .
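The hydrolysis chemistry underlying this conversion can be written per PHPS repeat unit; the stoichiometry below is the commonly reported overall reaction (SiO2, NH3, and H2 are filled in from the well-known PHPS chemistry, as the symbols are missing from the text):

```latex
\underbrace{\tfrac{1}{n}\,(\mathrm{SiH_2\text{--}NH})_n}_{\text{PHPS repeat unit}}
\;+\; 2\,\mathrm{H_2O}
\;\longrightarrow\;
\mathrm{SiO_2} \;+\; \mathrm{NH_3} \;+\; 2\,\mathrm{H_2}
```

Each repeat unit exchanges its Si–H and Si–N bonds for Si–O bonds, which is why water supplied at the interface is sufficient to drive the conversion.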
PHPS can be converted into SiO2 through various methods, including annealing within the temperature range of 200-1000 °C 12 , 13 , infrared (IR) irradiation 14 – 16 , treatment with pH-controlled chemicals 17 , and exposure to moisture 18 , 19 . Since PHPS forms SiO2 membranes through simple approaches compared to vacuum processes such as chemical vapor deposition (CVD), PHPS is widely applied to the coating of polymer films and metals 20 – 22 and to interlayer dielectrics 23 , 24 .
It is worth noting that the PHPS conversion into SiO2 proceeds even at room temperature in the presence of water 11 , 13 , 25 , 26 . Hence, the conversion of PHPS to SiO2 holds the potential for creating a bonding interface consisting of SiO2 at room temperature, by introducing water at the PHPS interface, for instance, through the use of plasma hydrophilic treatment.
Based on this concept, this study proposes a novel approach to room-temperature wafer bonding that forms a SiO2 bonding interface utilizing the PHPS conversion. The plasma hydrophilic treatment is applied to the Si wafer surface or the PHPS layer, thereby introducing adsorbed water to facilitate the PHPS conversion at the bonding interface. This is followed by wafer bonding via PHPS, leading to the formation of a SiO2 bonding interface at room temperature.
To investigate the proposed wafer bonding process via PHPS, two approaches were explored. The first approach utilized the PHPS layer as a source of water and oxygen for the conversion of the PHPS bonding interface. In this case, the PHPS layers on the bonding pair of wafers were directly subjected to plasma hydrophilic treatment. Wafer bonding was then performed via the PHPS layers on both wafers of the bonding pair. The effects of plasma treatment were examined by comparing conditions in which both, one, or neither of the PHPS layers underwent plasma treatment.
In the second approach, plasma treatment was applied to the Si wafer surfaces. In this scenario, plasma hydrophilic treatment facilitated water adsorption on the Si wafer surfaces, followed by PHPS coating on the Si wafers. This implies that the PHPS receives water not from the ambient air but from the Si wafer surface. Subsequently, wafer bonding was performed via a single PHPS layer, using the PHPS-coated wafer and the plasma-treated Si wafer. For comparison, wafer bonding was also performed via a single PHPS layer without plasma treatment of the wafer surface.
4-inch Si wafers, with a thickness of 525 μm, were utilized for the wafer bonding experiments. A 20 wt% solution of PHPS in dibutyl ether solvent was purchased.
In the case of wafer bonding via PHPS layers on both wafers, the wafers were first coated with the PHPS solution by spin coating at 2000 rpm for 20 s. The coated PHPS layers underwent a baking process on a hot plate at 100 °C for 5 min to volatilize the solvent. This baking step completely removes the solvent from the PHPS layer 32 , 43 , which is also supported by XPS and EDX analysis (data not shown). Subsequently, the PHPS layers were treated with plasma at 150 W for 60 s to induce hydrophilicity on the PHPS layer’s surface. After the surface treatment, the PHPS-coated surfaces of the Si wafers were manually brought into contact under ambient air at room temperature. For comparison, wafer bonding was conducted with plasma treatment applied to both PHPS layers, to one PHPS layer, and without any plasma treatment.
In the case of wafer bonding via a single PHPS layer, one wafer of the bonding pair was coated with PHPS. First, the bonding pair of Si wafers was treated with two sequential plasma exposures to enhance water adsorption 44 , 45 , each conducted at a power of 150 W for 60 s. After the plasma treatment, the wafers underwent a DI water rinse and were subsequently spin-dried. Following these surface preparations, the PHPS solution was spin-coated onto one wafer of the treated pair, and solvent volatilization was conducted under the same conditions as described earlier. Finally, the Si wafer coated with PHPS and the uncoated wafer were bonded together in ambient air at room temperature. Wafer bonding was also performed without the plasma hydrophilic treatment of the Si wafers for comparison.
The PHPS layer was analyzed using XPS to investigate its surface composition. The bonding quality was evaluated by IR imaging of the bonded wafers and by bond strength measurements using the blade test 46 . The properties of the cross-sectional bonding interface were observed using SEM with EDX.
In order to investigate the PHPS conversion into SiO2 at room temperature by plasma hydrophilic treatment, the chemical composition of the PHPS coating on the wafers was analyzed by X-ray photoelectron spectroscopy (XPS). Figure 1 presents the XPS core spectra of the PHPS surface before and after plasma treatment. For the as-coated PHPS layer, no oxygen peak is observed, accompanied by a significant nitrogen peak. The silicon peak at around 101.3 eV indicates silicon nitride 27 , 28 . This is attributed to the intrinsic composition of PHPS 16 , 29 , 30 . However, when the PHPS layer is treated with plasma, an increase in the oxygen peak and a decrease in the nitrogen peak are significant. Moreover, the silicon peak shifts towards a higher binding energy, around 103.5 eV, indicating the presence of silicon oxide 27 , 28 . Since the plasma gas itself supplies neither oxygen nor water, it is suggested that water adsorbs on the PHPS surface after the plasma treatment. The PHPS layer then consumes the adsorbed water present on its hydrophilic surface for the conversion reaction.
The stitched IR images of typical bonded wafers with PHPS layers are depicted in Fig. 2. These images reveal the presence of interfacial voids, which appear as patterns with interference fringes. The bonding interface generally shows good adhesion without significant interfacial voids, except for the bonding via both side PHPS layers with both-side plasma treatment (Fig. 2(a)). Owing to the high viscosity of PHPS before its conversion to SiO2, good adhesion is achieved through the PHPS layers. Additionally, this indicates that submicron-sized particles can become embedded within the soft PHPS layers 17, 31.
Conversely, bonding via both-side plasma-treated PHPS layers results in large interfacial voids, as shown in Fig. 2(a). As indicated by the XPS analysis, plasma treatment converts the PHPS surface to hard SiO2. However, since the spin-coated PHPS layer does not possess a smooth and flat surface, the converted SiO2 surface does not fully compensate for surface asperities, leading to lower-quality adhesion.
The results of the bond strength measurements obtained through blade insertion tests are presented in Fig. 3. For the bonding via both side PHPS layers, the bond strength is low when both PHPS layers are subjected to plasma treatment or when plasma treatment is not performed, yielding bond strengths of 0.64 J/m² and 0.30 J/m², respectively. In the scenario where both sides of the PHPS layers undergo plasma treatment, both surfaces are converted into SiO2 with surface asperities. Consequently, weak adhesion and a low bond strength result, which aligns with the IR observations. In cases where plasma treatment is not performed, an insufficient supply of water at the bonding interface prevents the PHPS layers from converting, resulting in a low bond strength.
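For reference, the blade insertion test (ref. 46) infers these surface energies from the crack length that opens ahead of the blade, via Maszara's double-cantilever relation for two identical wafers. The sketch below is a minimal illustration; the wafer, blade, and crack-length values are assumptions, not the parameters of this work:

```python
def maszara_surface_energy(E, t_wafer, t_blade, crack_length):
    """Surface energy (J/m^2) of each bonded surface from a blade insertion
    (crack-opening) test on two identical wafers, via Maszara's relation:
        gamma = 3 * E * t_w**3 * t_b**2 / (32 * L**4)
    E: Young's modulus (Pa); t_wafer, t_blade: thicknesses (m); L: crack length (m).
    """
    return 3.0 * E * t_wafer**3 * t_blade**2 / (32.0 * crack_length**4)

# Illustrative numbers (assumptions): 525-um Si wafers (E ~ 166 GPa),
# a 100-um blade, and a 12-mm crack length.
gamma = maszara_surface_energy(166e9, 525e-6, 100e-6, 12e-3)  # ~1.09 J/m^2
```

Because the crack length enters as the fourth power, small errors in reading the crack length dominate the uncertainty of the quoted J/m² values.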
Conversely, when only one side of the PHPS layers is treated with plasma, a significantly improved bond strength of 5.54 J/m² is achieved. In this case, the bonding interface features the adhesive PHPS layer in contact with the hydrophilic PHPS layer. The hydrophilic layer with adsorbed water facilitates the conversion of the untreated PHPS layer into SiO2. As a result, both strong adhesion and a high bond strength are attained.
In the case of bonding via the single side PHPS layer, when wafer bonding is conducted without the plasma hydrophilic treatment, the resulting bond strength measures 1.07 J/m². In contrast, bonding with plasma treatment exhibits a significantly improved bond strength of 6.02 J/m². Consistent with the bonding using both side PHPS layers, the plasma treatment introduces adsorbed water to the PHPS layer, thereby promoting its conversion to a mechanically stable SiO2 interface at room temperature.
Figure 4(a) shows the cross-sectional scanning electron microscopic (SEM) images of the bonding interface for the Si wafers bonded using PHPS layers on both sides with one side subjected to plasma treatment. In the low-magnification image, the bonding interface appears uniform and devoid of voids. At higher magnification, the bonding interface is seen to consist of two PHPS layers, each with a thickness of 0.4 μm, resulting in a total thickness of 0.8 μm. For the bonding via a single side PHPS layer with plasma treatment shown in Fig. 4(b), a uniform and void-free bonding interface is also clearly visible, with the thickness of the PHPS layer measuring 0.4 μm, consistent with the thickness observed in the bonding using double side PHPS layers.
Energy dispersive x-ray spectroscopy (EDX) analysis was performed across the bonding interface, and the results are presented in Fig. 5. For the bonding via double side PHPS layers with one-side plasma treatment, the presence of O and N is detected at the interface of the PHPS layers, while the Si signal is relatively lower compared to the bulk Si area. This can be attributed to the lower density of Si atoms in the converted PHPS layers. Moreover, the O intensity appears to be uniformly distributed across the PHPS layers, while the N intensity is relatively weak on the left side of the PHPS layers but as strong as O on the right side.
This distribution of N intensity is attributed to the differing treatment of the two PHPS layers. Specifically, the left-side PHPS layer in Fig. 5(a) is treated with plasma, while the right side is not. The dominant O signal on the left side is consistent with the PHPS conversion by plasma treatment, while the strong N signal on the right side suggests that the PHPS layer in this region is only partially converted to SiO2.
The EDX line profile across the bonding interface of the single PHPS layer is illustrated in Fig. 5(b). In the PHPS layer area, the Si intensity is lower, while the presence of O and N is detected at the bonding interface. In comparison to the bonding utilizing both side PHPS layers shown in Fig. 5(a), the composition of the PHPS layer appears to be distributed more uniformly across the bonding interface, and the N intensity is closer to the background level. This suggests a more uniform conversion of the PHPS layer to SiO2 than in the bonding via both side PHPS layers. Given that the single PHPS layer at the bonding interface is situated between hydrophilic Si surfaces, the adsorbed water diffuses into the PHPS layer from both sides. As a result, the conversion of PHPS to SiO2 proceeds more uniformly than in the bonding via both side PHPS layers.
Furthermore, the N signal in EDX can be partly attributed to the NH3 byproduct resulting from the PHPS conversion. Given that N is generally spread throughout the PHPS layers, this suggests the diffusion of the NH3 and H2 byproducts into the PHPS layer. As the PHPS converted into SiO2 has an amorphous structure, both NH3 and H2 are expected to diffuse through it similarly to H2O.
To investigate the PHPS conversion at the bonding interface, XPS analysis was performed on the debonded surfaces for the bondings using both side PHPS layers and a single side PHPS layer, with and without plasma treatment. The atomic ratio of O and N was calculated from the O1s and N1s peaks for each condition, as illustrated in Fig. 6.
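An atomic ratio of this kind is conventionally obtained from XPS by dividing each peak area by its relative sensitivity factor before normalizing. A minimal sketch follows; the peak areas and sensitivity factors are illustrative assumptions, not values from this study:

```python
def atomic_fraction(peak_areas, rsf):
    """Atomic fractions from XPS peak areas using relative sensitivity
    factors: n_i is proportional to I_i / S_i (homogeneous-sample model)."""
    corrected = {el: peak_areas[el] / rsf[el] for el in peak_areas}
    total = sum(corrected.values())
    return {el: v / total for el, v in corrected.items()}

# Hypothetical O1s/N1s areas and Scofield-type sensitivity factors.
fractions = atomic_fraction({"O": 1200.0, "N": 540.0}, {"O": 2.93, "N": 1.80})
n_to_o = fractions["N"] / fractions["O"]  # N/O atomic ratio
```

The same corrected intensities can be normalized either as N/O or as N/(N + O), so the definition of the quoted "N ratio" should be fixed before comparing conditions.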
For bonding via both side PHPS layers without plasma treatment, the N ratio was 0.88, suggesting that the N in the PHPS molecules was only slightly replaced by O through the natural adsorption of water on the Si wafers 32. However, when one of the two PHPS layers was treated with plasma, the N ratio decreased to 0.50, indicating that the adsorbed water introduced by the plasma treatment replaced the N at the bonding interface.
On the other hand, the debonded surface with a single side PHPS layer also exhibited a high N ratio of 0.75 without plasma treatment. Similarly to the bonding via both side PHPS layers, plasma treatment reduced the N ratio, to 0.30, for bonding via a single side PHPS layer. As indicated by the cross-sectional EDX analysis, the single PHPS layer is sandwiched between the hydrophilic Si surfaces. The conversion of the single side PHPS layer proceeded further owing to the relatively larger amount of water present at the bonding interface, as the amount of PHPS was half of that in the bonding via both side PHPS layers. Consequently, the replacement of N by O was more extensive than in the bonding via double side PHPS layers.

Discussion
Based on the experimental results, we propose a room-temperature bonding mechanism as illustrated in Fig. 7. For bonding with both side PHPS layers, the plasma treatment initiates the conversion of one PHPS layer to SiO2, facilitated by adsorbed water, as confirmed by the XPS analysis. Following the bonding of the plasma-treated PHPS layer with the untreated PHPS layer, the latter adheres to the substrates owing to its ability to deform and compensate for surface asperities, as evidenced by IR imaging of the bonding interface. Subsequently, the water from the plasma-treated PHPS layer diffuses into the untreated PHPS layer, driving its hydrolysis 29. As the condensation polymerization of Si-OH groups proceeds and enhances the bond strength even at room temperature 6, 33, 34, SiO2 is formed at the bonding interface, resulting in an increased bond strength. The plasma-treated PHPS layer serves as a source of water and oxygen, supporting this conversion process, as indicated by the cross-sectional EDX analysis.
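The hydrolysis and condensation steps invoked here follow the well-documented moisture-curing chemistry of perhydropolysilazane. As background (the reaction equations are not stated explicitly in the text), the conversion per Si–N unit can be summarized as:

```latex
\begin{aligned}
\text{hydrolysis:}   &\quad -(\mathrm{SiH_2{-}NH})- \;+\; 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{SiH_2(OH)_2} \;+\; \mathrm{NH_3}\\
\text{condensation:} &\quad \mathrm{SiH_2(OH)_2} \;\longrightarrow\; \mathrm{SiO_2} \;+\; 2\,\mathrm{H_2}\\
\text{overall:}      &\quad -(\mathrm{SiH_2{-}NH})- \;+\; 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{SiO_2} \;+\; \mathrm{NH_3} \;+\; 2\,\mathrm{H_2}
\end{aligned}
```

The NH3 and H2 byproducts of this overall reaction are consistent with the N signal distributed through the converted layers in the EDX profiles.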
In the case of bonding with a single PHPS layer, the PHPS layer is coated on the hydrophilic surface with adsorbed water, thus facilitating PHPS conversion, including the hydrolysis step, starting from the rear side. It has also been reported that the water adsorbed on the substrates reacts with the PHPS layer coated on the substrate 32. This conversion is driven by the diffusion of the adsorbed water originating from the Si surface into the PHPS layer. Concurrently, the PHPS layer serves as an adhesive during the bonding of the PHPS-coated wafer to the uncoated wafer. Given that the surface of the uncoated wafer is also hydrophilic due to adsorbed water, water similarly diffuses into the PHPS layer from the front side. Consequently, the PHPS layer, situated between two hydrophilic Si surfaces, undergoes a more uniform conversion into SiO2 compared to bonding via PHPS layers on both sides, as supported by the EDX analysis.
Compared to other wafer bonding techniques, wafer bonding via PHPS is comparable to water glass bonding 35–37 and sol-gel bonding 38–40. Both of these techniques involve bonding through a liquid precursor of SiO2, followed by an annealing step to create a bonding interface composed of SiO2. However, a fundamental distinction lies in the mechanism of transformation to SiO2. In sol-gel or water glass bonding, the conversion to SiO2 occurs through condensation polymerization, necessitating annealing to eliminate H2O from the bonding interface. In contrast, the proposed method utilizing a PHPS layer achieves the formation of the SiO2 bonding interface through hydrolysis, allowing for bonding at room temperature without the need for annealing.
Compared with the conventional direct bonding of Si or SiO2, wafer bonding via PHPS exhibits a comparable bonding quality. In conventional hydrophilic bonding of Si or SiO2 with plasma hydrophilic treatment, the bond strength typically increases with post-bonding annealing. For Si/Si and SiO2/SiO2 wafer bonding, the typical bond strength is less than 1 J/m² without post-bonding annealing 6, 41. The bond strength increases to over 2 J/m² after annealing at temperatures no lower than 300 °C. Whereas conventional hydrophilic wafer bonding requires a heating step for a robust bonding interface, wafer bonding via PHPS achieves a high bond strength without annealing.
In addition, conventional hydrophilic wafer bonding is often associated with concerns about bubble formation at the bonding interface. Post-bonding annealing in hydrophilic bonding facilitates the condensation polymerization of Si-OH at the bonding interface. This process generates byproducts such as H2 and residual H2O, resulting in the formation of bubbles and voids at the bonding interface 41, 42. In contrast, wafer bonding via PHPS, as depicted in Fig. 2, does not exhibit the formation of bubbles, contributing to an improved bonding quality.

Conclusions
In this paper, we investigated and demonstrated wafer bonding via PHPS at room temperature. Through the strategic use of plasma hydrophilic treatment on either the Si surface or the PHPS layer, we effectively facilitated the conversion of PHPS into a robust bonding interface composed of SiO2. This approach achieves bond strengths exceeding 5 J/m² and void-free bonding interfaces, which are attributed to the combined effects of PHPS adhesion and SiO2 formation. The XPS and EDX analyses also suggest a bonding mechanism in which the water adsorbed through the hydrophilic plasma treatment diffuses into the PHPS layer, resulting in the partial conversion of PHPS into SiO2 and contributing to the robust bonding interface. This technique will provide a new approach to wafer bonding at room temperature for electronics packaging.
This work is supported by the Amada foundation, Tokyo Ohka Foundation for The Promotion of Science and Technology, and JSPS KAKENHI Grant Number JP23K13554.
Author contributions
K.T. designed and carried out the experiments, analyzed the experimental results, and wrote the manuscript. K.T., T.S., and E.H. discussed results, commented on, and edited the manuscript.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:56 | Sci Rep. 2024 Jan 13; 14:1267 | oa_package/5f/94/PMC10787747.tar.gz |
PMC10787748 | 38218950 | Introduction
Reproduction in an environment suitable for genome stability is important for the preservation of species. This requirement is more imperative in poikilothermic animals, in which gametes are directly exposed to undesirable temperatures. As a result, they may have evolved a system that rapidly adapts to such changes. In fish, for example, temperature is generally considered to be an important factor in determining the exact timing of gamete maturation and spawning 1.
Inhibition of reproduction by high temperature is observed in a wide range of fish species 2. A high-temperature environment inhibits female ovulation and male spermatogenesis in Atlantic salmon (Salmo salar), leading to reproductive arrest 3. The suppression of ovulation by the high-temperature environment is due to decreased expression of 20β-hsd, which in turn inhibits maturation-inducing steroid conversion. However, the mechanism of spermatogenesis arrest, in particular how temperature is sensed in the testis, remains unclear.
Testes in mammals as well as in fish show clear sensitivities to high-temperature environments. Mammalian testes exposed to temperatures >37 °C display abnormal spermatogenesis through germ cell apoptosis via various signaling pathways 4. A proposed model posits that high temperature causes damage during meiotic prophase I, and damaged pachytene spermatocytes are eliminated at the checkpoints that monitor meiotic progression 5. Checkpoints in multicellular organisms induce apoptosis in DNA-damaged cells and contribute to genome stability across generations. This mechanism contributes to the lower mutation rate of germ cells compared with somatic cells, which is important for stronger genome integrity 6. The mitotic checkpoint also functions in zebrafish: high temperature applied to knockouts of Mps1, which is required for cell cycle fidelity and genome stability, leads to checkpoint failure and the production of aneuploid sperm 7.
Leydig cells are found in the interstitial tissue of the testis and regulate spermatogenesis and reproductive function through the secretion of steroid hormones such as testosterone 8 . In mammalian models, hyperthermic stimulation of the testis by an artificially induced cryptorchidism leads to a decrease in Leydig cell activity as well as in germ cells 9 . Reduced activity of Leydig cells leads to decreased steroid hormone secretion 10 , 11 . High temperature also induces apoptosis not only in germ cells but also in Leydig cells 12 . However, the molecular mechanism and the significance of apoptosis in Leydig cells induced by high-temperature stimulation remain unexplored. In particular, the causal relationship between reduced Leydig cell activity and abnormal spermatogenesis due to the high-temperature environment is not clear.
In this study, we provide evidence that Leydig cells in zebrafish undergo apoptosis prior to germ cells when placed in high temperatures, and that Trpv4 is involved in this process. Furthermore, we show that the Leydig cell-specific apoptosis decreased the synthesis of steroid hormones, which in turn impaired the motility of sperm whose genome integrity is compromised.

Methods
Fish
All experiments using animals in this study followed the guidelines of Osaka Medical and Pharmaceutical University. Adult zebrafish and medaka were maintained at 28.5 °C on a 10 h light:14 h dark photoperiod. Zebrafish were of the RIKEN WT background, and the medaka strain was OKcab. Male adults 6–12 months old were used for experiments.
Temperature stimulation
Up to four individuals were temperature-stimulated in W22 × D12 × H11 cm tanks with 1.8 l of water, and water changes were performed daily. The tanks were placed in a water bath at 35 °C and oxygen was supplied by aquarium air stones. Temperature stimulation was interrupted for feeding from 12:00 to 13:00 daily. Exposure of isolated sperm was for 20 min (Supplementary Fig. 9 ).
Histology
Testes were fixed in Bouin solution, and 4 μm plastic sections were prepared using Technobit 8100 (Heraeus Kulzer). Samples were stained with hematoxylin solution for 10 min (Muto pure chemicals, Tokyo, Japan), and stained with 1% eosin (Muto pure chemicals, Tokyo, Japan) for 30 s. The mounted samples were imaged using an Olympus DP74 camera (Evident, Tokyo, Japan).
qRT-PCR
RNA samples were extracted from testes. According to the manufacturer’s instructions, 0.5 μg RNA template was used for reverse transcription to synthesize cDNA using a first-strand cDNA synthesis kit (ReverTra Ace, TOYOBO, Tokyo, Japan). The qPCR primers are listed in Supplementary Table 1 . For amplification, the KOD SYBR qPCR/RT (TOYOBO, Tokyo, Japan) and ABI real-time system were used. Statistical significance was evaluated by two-tailed Student’s t -test.
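The text does not state the quantification model; a common choice for SYBR-based qRT-PCR is the Livak 2^-ΔΔCt method, sketched below with hypothetical Ct values (assuming roughly 100% amplification efficiency for both primer pairs and a stable reference gene):

```python
def fold_change(ct_target_t, ct_ref_t, ct_target_c, ct_ref_c):
    """Relative expression by the Livak 2^-ddCt method.
    Inputs are Ct values for the target and reference gene in the
    treated (t) and control (c) samples."""
    ddct = (ct_target_t - ct_ref_t) - (ct_target_c - ct_ref_c)
    return 2.0 ** -ddct

# Hypothetical Ct values: a marker gene appearing 2 cycles later (relative
# to the reference gene) in heat-treated testes than in controls.
fc = fold_change(26.0, 18.0, 24.0, 18.0)  # 0.25, i.e. 4-fold downregulation
```

A fold change below 1 corresponds to downregulation in the treated sample, matching the direction of the marker-gene changes reported in Results.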
In situ hybridization
Whole-mount RNA in situ hybridization (WISH) was carried out following a standard protocol 47. The primers used to amplify the probe template regions are shown in Supplementary Table 1. Briefly, testes were fixed in 4% paraformaldehyde in 1× PBS overnight. On the following day, samples were hybridized at 65 °C overnight. Samples were further incubated with anti-digoxigenin-AP antibody solution (1:2000) overnight at 4 °C and stained with NBT/BCIP. The stained samples were imaged using an Olympus DP74 camera (Evident). Double in situ hybridization was carried out following a standard protocol 48. The first probe was labeled with DIG and the second probe was labeled with FITC. After the first staining, incubation was performed with 0.1 M glycine–HCl pH 2.2 for 40 min to remove alkaline phosphatase activity.
Immunohistochemistry
Immunofluorescence staining was performed using an anti-Cleaved caspase 3 antibody (#9661, Cell Signaling Technology, USA). Alexa 488 was used as the secondary antibody. Nuclear DNA was stained with 4’,6-diamidino-2-phenylindol (DAPI). Stained samples were imaged using Leica SP8 (Leica, Germany).
Targeted genetic disruption of Trpv4 by CRISPR/Cas9
CRISPR/Cas9 target sites in Trpv4 were searched using CRISPR/Cas9 target online predictor (crispr.direct) 49 . Selected target sites are shown in Supplementary Fig. 5 . Guide RNA was generated using a Guide-it sgRNA In Vitro Transcription kit (TAKARA, Tokyo, Japan). Cas9 mRNA (50 ng/μl) and sgRNA (25 ng/μl) were injected into one-cell stage embryos. Mutant allele was identified in F1 adult fish using the primer sets shown in Supplementary Table 1 and sequence analysis.
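As described in Results, the allele recovered by this screen carries an 8 bp deletion causing a frameshift and a premature stop codon. A toy sketch of how such an allele is classified from sequence follows; the ORF below is invented for illustration and is not the Trpv4 sequence:

```python
def first_stop_codon(orf, stops=("TAA", "TAG", "TGA")):
    """Codon index of the first in-frame stop codon, or None if absent."""
    for i in range(0, len(orf) - 2, 3):
        if orf[i:i + 3] in stops:
            return i // 3
    return None

wild = "ATGGGAGGAGGCCTTGACGGGTAA"   # toy ORF: stop only at the last codon
mutant = wild[:3] + wild[11:]       # remove 8 bases, mimicking the deletion

# A deletion whose length is not a multiple of 3 shifts the reading frame.
assert (len(wild) - len(mutant)) % 3 != 0
```

In this toy example the shifted frame brings a stop codon forward (codon 2 instead of codon 7), truncating the predicted protein, which is the signature used to call a frameshift allele from Sanger traces.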
Sperm motility measurement
Sperm from zebrafish and medaka testes were diluted in fetal bovine serum (FBS). In zebrafish, sperm diluted in FBS were activated by adding an equal volume of breeding water immediately prior to measurement. Sperm motility was recorded as sequential 1024 × 1024-pixel phase-contrast images with a ×10 objective lens for 3 s at 300 frames per second (fps) using the cell motion system SI8000 (Sony Corporation, Tokyo, Japan). Images were acquired with SI8000 View software (Sony Corporation, Tokyo, Japan) and analyzed with SI8000R Analyzer software. Appropriate amounts of 20β-S were added to the sperm suspension and incubated for 10 min before sperm motility was measured. 20β-S was purchased from Steraloids Inc. Statistical significance was evaluated by two-tailed Student's t-test.
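Swimming velocity from tracked head positions is conventionally summarized by CASA-style metrics such as curvilinear (VCL) and straight-line (VSL) velocity. A minimal sketch, assuming (x, y) coordinates in micrometres sampled at the 300 fps frame rate used here (the track below is invented for illustration):

```python
import math

def vcl_vsl(track, fps=300.0):
    """Curvilinear velocity (VCL: path length / time) and straight-line
    velocity (VSL: net displacement / time) from a list of (x, y)
    positions in micrometres sampled at `fps` frames per second."""
    dt = (len(track) - 1) / fps
    path = sum(math.dist(a, b) for a, b in zip(track, track[1:]))
    net = math.dist(track[0], track[-1])
    return path / dt, net / dt

# A tight circular trajectory covers path length but little net distance,
# so VSL (and the VSL/VCL linearity ratio) collapses toward zero.
vcl, vsl = vcl_vsl([(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)])
```

This distinction matches the circling, short-range tracks described in Results for heat-exposed Trpv4+/+ sperm: VCL can stay finite while VSL drops.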
Flow cytometry
Testes were isolated from adult zebrafish and dissociated with 0.2% collagenase. The cell suspension was fixed in 2% PFA and stained with propidium iodide (PI) staining solution (50 μg/ml PI and 20 μg/ml RNase) for at least 10 min at room temperature in the dark. The cells were subsequently filtered through a 40-μm nylon mesh, and the suspension was analyzed using a FACSAria Fusion flow cytometer (BD Biosciences). Statistical significance was evaluated by two-tailed Student's t-test.
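A minimal sketch of the gating logic used in Results (small, low-PI events kept as the mature sperm fraction) together with a simple spread statistic for DNA content; the thresholds and event values below are illustrative assumptions, not instrument settings:

```python
def gate_sperm(events, fsc_max=200.0, pi_max=50.0):
    """Keep small (low forward scatter) and low-PI events: the mature
    haploid sperm gate. Thresholds here are illustrative only."""
    return [e for e in events if e["fsc"] < fsc_max and e["pi"] < pi_max]

def cv(values):
    """Coefficient of variation of the PI signal; a wider spread of DNA
    content in the gated fraction gives a larger CV."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
    return sd / m

events = [{"fsc": 100, "pi": 20}, {"fsc": 120, "pi": 25}, {"fsc": 400, "pi": 80}]
gated = gate_sperm(events)                 # the large, PI-bright event is excluded
spread = cv([e["pi"] for e in gated])
```

A broader PI distribution within the gate, quantified this way, corresponds to the "wider range of DNA content" reported for heat-exposed sperm.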
Measurement of fertilization rate and survival rates in early embryogenesis
Embryos that developed past the epiboly stage were measured as fertilized eggs. Daily survival rates were measured relative to the number of eggs reaching the epiboly stage. Statistical significance was evaluated by two-tailed Student’s t -test.
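Both readouts reduce to simple proportions; a minimal sketch with a hypothetical clutch (the counts are invented for illustration):

```python
def fertilization_rate(n_epiboly, n_total):
    """Fraction of eggs developing past the epiboly stage (scored fertilized)."""
    return n_epiboly / n_total

def daily_survival(alive_per_day, n_epiboly):
    """Per-day survival relative to the number of eggs that reached epiboly."""
    return [a / n_epiboly for a in alive_per_day]

# Hypothetical clutch: 100 eggs laid, 80 pass epiboly, then daily counts.
rate = fertilization_rate(80, 100)            # 0.8
survival = daily_survival([80, 72, 60], 80)   # [1.0, 0.9, 0.75]
```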
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Results
High temperature affects Leydig cells prior to germ cells
To determine the appropriate condition for high-temperature stimulation in adult zebrafish, whose optimal temperature is 28–30 °C, we first examined fish survival during 2 weeks of incubation at 33–39 °C. At 34 °C or below, no mortality was observed, whereas temperatures of 35 °C or above caused the death of incubated fish. Based on this observation, 34 °C was chosen as the stimulus temperature in subsequent analyses (Supplementary Fig. 1). No abnormalities in spermatogenesis or morphological changes in the testes were observed after 2 weeks of rearing at 29 °C in the identical setup (Supplementary Fig. 2).
To analyze the effect of temperature stimulation on testicular structures and spermatogenesis, sections of the testes were observed by HE staining (Fig. 1a, Supplementary Fig. 3a). There are three developmental stages of spermatids: initial (E1), intermediate (E2), and final (E3) 13. Temperature stimulation led to a decrease in spermatids, especially at E3 (circled in black), and the appearance of abnormal cells (circled in red). The abnormal cells had large nuclei relative to the cytoplasm and differed in appearance from all normal cell types in germ cell development (Supplementary Fig. 4). Germ cells at the pachytene, diplotene, and metaphase stages were present, but the number of spermatids was dramatically reduced in testes exposed to 34 °C. In normal zebrafish, germ cells at the same differentiation stage are confined within a cyst 14. In contrast, abnormal cells were observed within single cysts containing meiotic germ cells at metaphase or spermatids after 1 and 3 days of exposure to 34 °C (Supplementary Fig. 4). After 7 days or 2 weeks at 34 °C, cysts with a mixture of normal and abnormal cells disappeared and only those with abnormal cells were observed (Supplementary Fig. 4). These results indicate that, under high temperature, differentiation from the metaphase to spermatid stages becomes abnormal.
In addition, there was a significant decrease in the interstitial tissue, which was observed as gaps between cysts. This tendency became more obvious as the duration of exposure to 34 °C became longer. Specifically, distinct Leydig cells were rarely detected after 2 weeks of exposure to 34 °C (Fig. 1a , upper panels).
Transfer of fish from 34 °C back to 29 °C resulted in the disappearance of the abnormal cell population and the appearance of interstitial cells within 7 days (Fig. 1a, lower panels). E3 spermatids appeared within 10 days of the transition to 29 °C, and spermatogenesis completely recovered to normal in 1 month. In zebrafish testes, undifferentiated type-A spermatogonia and differentiated type-B spermatogonia are present 14. Type-A spermatogonia differentiate into E3 spermatids within 7 days 13. No morphological abnormalities were observed in type-A or type-B spermatogonia in testes treated at 34 °C for 2 weeks (Supplementary Fig. 3b). Therefore, we expected that the transition to 29 °C would induce differentiation to the E3 spermatid within 1 week. However, differentiation into E3 spermatids was observed only after 10 days of recovery, which suggests the involvement of factors other than autonomous differentiation of germ cells.
We analyzed the expression of several groups of genes in testes exposed to 34 °C. Among them, genes in the Heat shock protein ( Hsp ) group showed significantly upregulated expression within 24 h of temperature stimulation (Fig. 1b ). Hsps are genes that function as molecular chaperones and protect cells when exposed to stress conditions such as heat 15 . Most of the Leydig cell marker genes were significantly downregulated in the testes at 3 days of temperature stimulation. In contrast, the majority of marker genes for germ cells and Sertoli cells showed no change (Fig. 1b ).
Based on the loss of E3 spermatids at 34 °C (Fig. 1a), we predicted a decrease in Odf3b, a spermatid marker 16; contrary to our expectation, however, its expression was not changed (Fig. 1b). To investigate why Odf3b expression was not decreased at 34 °C, we analyzed the localization of Odf3b expression in testes. Odf3b was expressed in spermatocytes and early spermatids but not in E3 spermatids at 29 °C (Supplementary Fig. 5). In contrast, it was expressed in spermatocytes and abnormal cells at 34 °C (Supplementary Fig. 5). These results suggested that temperature stimulation led to differentiation of Odf3b-expressing abnormal spermatids, which was consistent with the histology (Fig. 1a).
Since temperature stimulation strongly affected Leydig cells, they were further analyzed by in situ hybridization using the Leydig cell marker Insl3 17. The region with a positive signal decreased as the duration of 34 °C treatment became longer (Fig. 1c), which was consistent with the HE staining and the expression analysis (Fig. 1a and b). Of note, the expression of Hspa5, involved in protein folding and assembly in the endoplasmic reticulum (ER) 18, was induced by high temperature not only in Leydig cells but also in germ cells (Fig. 1d). These results suggest that the ER stress response to temperature stimulation occurred in both Leydig cells and germ cells.
Suspecting that apoptosis is involved in the different responses between Leydig cells and germ cells, we analyzed the localization of its marker, cleaved caspase 3. The signal was detected in the interstitial cells (circumscribed in white lines) after 1-day exposure to 34 °C and in spermatocytes (in red lines) after 3 days (Fig. 1e ). These results indicated that apoptosis in response to 34 °C exposure is induced in Leydig cells prior to spermatocytes.
Temperature stimulation led to increased expression of Trpv4 in Leydig cells
To identify molecules involved in temperature sensing, we performed qRT-PCR for a group of Trp channel genes, sensor molecules for thermoreception 19 . Transient receptor potential (TRP) channels are expressed in sensory organs and play an important role in temperature sensitivity in zebrafish 20 , 21 . Expression of Trpv4, Trpm4b2 , and Trpm4b3 was significantly increased by temperature stimulation (Fig. 2a ). Among the three, the change was most evident for Trpv4 , which was upregulated specifically in interstitial cells as shown by in situ hybridization (Fig. 2b ). To further characterize these Trpv4 -expressing cells, we performed double in situ hybridization using Leydig cell marker genes Insl3 and Trpv4 . Remarkably, Trpv4 expression (blue) was limited to Insl3 -positive cells (brown) in testes treated at 34 °C (Fig. 2c ). These results show that temperature stimulation led to increased expression of Trpv4 specifically in Leydig cells.
In Trpv4 KOs, apoptosis of Leydig cells was not induced by high temperature
To clarify the function of TRPV4, a Trpv4 mutant was generated using the CRISPR-Cas9 system (Supplementary Fig. 6 ). An 8 bp deletion in exon 2 resulted in a frameshift causing a change in amino acid sequence and a premature stop codon. Trpv4 −/− zebrafish were fertile and no abnormalities in spermatogenesis were observed under normal rearing conditions.
To analyze the effects of high-temperature treatment on testis structure and spermatogenesis in Trpv4 −/− , testis sections were observed with HE staining (Fig. 3a , Supplementary Fig. 7 ). After 2 weeks at 34 °C, Trpv4 −/− as well as Trpv4 +/+ showed a decrease in E3 spermatid (black lines) and the appearance of abnormal cells (red lines). In contrast, distinct effects were observed in stromal tissue: Trpv4 +/+ showed reduced interstitial tissue (arrows in Fig. 3a ), while in Trpv4 −/− it remained unaffected. Moreover, recovery of E3 spermatid was observed as early as 7 days after transferring the fish from 34 to 29 °C, earlier in Trpv4 −/− than in Trpv4 +/+ .
To further examine the effects of Trpv4 KO on Leydig cells, the expression of the Leydig cell marker genes was examined. Star, Insl3, and Cyp11c1 were significantly decreased by treatment at 34 °C in Trpv4 +/+, while they remained unchanged in Trpv4 −/− (Fig. 3b). Expression of Cyp17a1 and Hsd3β was significantly decreased in Trpv4 −/− as well as in Trpv4 +/+ at 34 °C. These results suggested that steroid hormone biosynthesis in Trpv4 −/− was affected to a lesser extent by temperature stimulation. Heterogeneity of Leydig cells in zebrafish, as reported in adult mice 22, may have affected the expression of marker genes in response to high-temperature stimulation. No noticeable differences were observed for Hsp, germ cell, or Sertoli cell marker genes (Supplementary Fig. 8). Correspondingly, the Insl3(+) area decreased as the duration of treatment at 34 °C increased in Trpv4 +/+, while it remained unchanged in Trpv4 −/− (Fig. 3c). Notably, the expression of Hspa5 showed no difference (Fig. 3d). Strong signals of cleaved caspase 3 were detected in the interstitial cells (white lines) and spermatocytes (red lines) after 3 days of 34 °C treatment in Trpv4 +/+. In Trpv4 −/−, in contrast, cleaved caspase 3 was restricted to the spermatocytes, and no clear signal in the interstitial cells was observed (Fig. 3e). These results indicate that temperature stimulation failed to induce Leydig cell apoptosis in Trpv4 −/−.
High temperature impaired sperm motility via Trpv4 in Leydig cell
We analyzed the involvement of Trpv4 in the motility of mature spermatozoa by exposing adult males to 34 °C for 1 day. When sperm motility was analyzed with the SI8000 system, which allows quantification of cell movement, a significant decrease in swimming velocity was observed in Trpv4 +/+ but not in Trpv4 −/− (Fig. 4a, c and Supplementary Movies 1–4, short movies showing representative cases for each genotype). The analysis of trajectories revealed that sperm from Trpv4 +/+ after 34 °C exposure followed small circular trajectories, traveling shorter distances, which was not observed in Trpv4 −/− (Fig. 4b). The steroid hormone 17α,20β,21-trihydroxy-4-pregnen-3-one (20β-S), secreted from the fish testis, acts on receptors expressed in sperm and controls their motility 23–26. 20β-S at 100 nM resulted in increased motility and improved trajectories of spermatozoa from Trpv4 +/+ after 34 °C exposure (Fig. 4d–f and Supplementary Fig. 9).
Based on these results, we hypothesized that a decrease in 20β-S secreted from Leydig cells after 34 °C treatment impaired sperm motility. The secretion of 20β-S in the testis was analyzed by LC–MS/MS. Unfortunately, 20β-S in the testes was below the detection limit (data not shown). We therefore used qRT-PCR to examine the expression of 20β-hsd , which encodes an enzyme involved in the synthesis of 20β-S 27 . In Trpv4 +/+ , the expression was significantly decreased in testes after 3 days of temperature stimulation, while no difference was observed in Trpv4 −/− (Fig. 4g ). The result was further confirmed by in situ hybridization. In Trpv4 +/+ , 20β-hsd was expressed in the interstitial cells at 29 °C, but its expression disappeared as the incubation at 34 °C lasted longer. In contrast, Trpv4 −/− showed clear expression in the interstitial cells even after temperature stimulation (Fig. 4h ). Based on these results, we propose that the reduction of 20β-S synthesis, caused by decreased expression of 20β-hsd in Leydig cells, is responsible for the decrease in sperm motility induced by high temperature.
Indeed, when sperm isolated from the testis was directly exposed to high temperature, its motility did not decrease even at 40 °C (Supplementary Fig. 10 ). Although the duration of exposure to high temperature was shorter than the incubation of the whole fish (Fig. 4 ) due to technical limitations of maintaining isolated sperm in vitro, these results were consistent with the hypothesis that the endocrine regulation of Leydig cells, not autonomous regulation of sperm, determined the sperm motility in response to high temperature.
Sperm matured in a high-temperature environment lead to abnormalities in offspring
We hypothesized that zebrafish actively inhibit sperm motility at high temperatures so that embryos containing damaged cells, in particular damaged gametes, will not be generated. To test this prediction, we used flow cytometry to analyze the chromosome content of sperm after 1 day of exposure to 34 °C. Regions of small cell size and low PI content were defined as mature sperm fractions (Supplementary Fig. 11 ). Sperm after 1-day exposure to 34 °C displayed a wider range of DNA content than sperm at 29 °C (Fig. 5a and b ).
Males incubated at 34 °C for 1 day were mated with females reared normally and incubated at 29 °C. Fertilization rates were significantly lower for both WT and Trpv4 −/− incubated at 34 °C than at 29 °C. However, in the 34 °C group, WT had significantly lower fertilization rates than Trpv4 −/− (Fig. 5c ). We propose that because sperm velocity in Trpv4 −/− did not decrease even at 34 °C, its decline in fertilization rate was smaller than that of WT (Supplementary Fig. 12 ). Among the fertilized eggs, there was no difference in survival up to 6 days post fertilization (dpf) (Fig. 5d ). However, higher rates of developmental abnormalities were observed in offspring from Trpv4 +/+ and Trpv4 −/− males exposed to 34 °C (Fig. 5e, f ). Because a high frequency of developmental abnormalities occurs in offspring derived from aneuploid sperm in zebrafish 7 , 28 , these results suggested that some of the spermatozoa that matured in high-temperature environments have compromised genome stability. They also suggested that Trpv4 is not involved in the maintenance of sperm quality.
If fertilization occurs at high temperatures, the resulting embryos are also likely to develop at high temperatures. We therefore examined the effect of 34 °C water temperature on developing zebrafish embryos. Embryos incubated at 34 °C showed high mortality and rates of developmental anomalies exceeding 70% at 1 dpf (Supplementary Fig. 13 ). It was clear that 34 °C is not a favorable temperature for normal embryonic development. However, it is of note that some embryos do develop normally and potentially grow up into fertile adults. These results indicated that zebrafish actively reduce sperm motility in order to suppress fertilization at high temperatures and thereby prevent the creation of embryos with compromised genome stability that could be transmitted to offspring.
Temperature-dependent regulation of Leydig cells through Trp is species-specific
Finally, to examine whether this mechanism of Leydig cell sensitivity to high temperature is universally conserved among fish, we analyzed medaka ( Oryzias latipes ), which had an optimal temperature range distinct from that of zebrafish 29 , 30 .
Similar to zebrafish , high-temperature stimulation in medaka testes leads to abnormal spermatogenesis 31 . However, the optimal temperature for spermatogenesis was also different between the two species. Medaka required exposure to higher temperatures compared to zebrafish to show reduced sperm motility (Supplementary Fig. 14a and b ).
We performed qRT-PCR analysis of a group of Trp channels in the testes of medaka exposed to 39 °C. The Hsp group was significantly upregulated by temperature stimulation, but none of the genes in the Trp group was significantly upregulated (Supplementary Fig. 14c ). Cleaved caspase3 was widely observed in the Leydig cells and germ cells, especially in spermatocytes, after 1 day of exposure to 39 °C (Supplementary Fig. 14d ). These results indicate that the induction of acute apoptosis in Leydig cells via Trpv4 by temperature stimulation is a phenomenon characteristic of zebrafish , and medaka may employ different molecules to reduce sperm motility at high temperature 31 .
Multiple reports in mammals showed that germ cells are the main target of detrimental high temperatures in testes 32 . Our study using zebrafish revealed that high temperature induced apoptosis in Leydig cells prior to germ cells. Moreover, we showed that Leydig cells and germ cells in zebrafish have distinct mechanisms in sensing and responding to high temperatures. Trpv4 was involved in the Leydig cell-specific high-temperature sensitivity, which caused apoptosis, leading in turn to impaired sperm motility and reduced fertilization.
TRP channels were identified as sensor molecules for thermal reception, through which cellular responses to temperature changes are regulated 33 . Each TRP channel has a unique thermoreceptive range. Although TRP proteins are expressed also in the testes 34 , Trp in the zebrafish testis is poorly characterized. Trpv4 is strongly expressed in stressful environments 35 , 36 and activated above 27–35 °C 37 , which fits the high-temperature condition used in this study. Trpv4 activation by endogenous and exogenous stimuli increases Ca 2+ influx, resulting in an excess of intracellular free Ca 2+ 38 , and causes apoptosis in pathological conditions 35 , 36 , 39 . Trpv4 in zebrafish was studied by Amato et al. who showed the expression of this protein in sensory organs 21 . The lack of temperature sensing mechanism may also contribute to the phenotypes of Trpv4 −/− we observed (Fig. 3 ). Taken together, these characteristics of Trpv4 fit our hypothesis that high temperature led to upregulation of Trpv4 in Leydig cells and caused apoptosis.
In zebrafish , as in mice, high-temperature stimulation caused abnormal spermatogenesis (Fig. 1a ). Specifically, high-temperature treatment induced apoptosis in spermatocytes and abnormal differentiation from spermatocyte to spermatid, which was consistent with mice. Meiotic checkpoints cause spermatogenesis defects in mice following high-temperature stimulus and eliminate damaged spermatocytes 40 . A similar, sperm cell-autonomous mechanism seems to operate also in zebrafish . Hormone secretion from Leydig cells is unlikely to be essential for these cell-autonomous apoptoses because spermatogenesis failure was not distinguishable between Trpv4 +/+ and Trpv4 −/− (Fig. 3 , Supplementary Fig. 12 ). Indeed, knockouts of steroid synthase genes in zebrafish can produce mature spermatozoa 41 – 44 , which was consistent with our hypothesis. On the other hand, the effect of hormones secreted from Leydig cells on certain aspects of spermatogenesis was suggested from our data. First, the transition from 34 to 29 °C led to earlier E3 spermatid differentiation in Trpv4 −/− compared to Trpv4 +/+ . Second, expression of several genes including Insl3 did not change in Trpv4 −/− exposed to 34 °C. In zebrafish , Insl3 was reported to act as a germline survival factor 42 . In Trpv4 −/− , hormone secretion from Leydig cells was likely involved in promoting the rapid differentiation of E3 spermatid or survival of the remaining germ cells.
We propose that the Leydig cell-specific temperature-sensing mechanism comprises a system that suppresses fertilization in zebrafish under high-temperature conditions. In their natural habitat, zebrafish live in shallow tropical water with slow currents that are subject to frequent temperature fluctuations 45 , where water temperatures can rise to 38.6 °C 46 . High-temperature environments lead to severe abnormalities in early zebrafish development (Supplementary Fig. 13 ). Importantly, the maturation of spermatozoa in a high-temperature environment resulted in aneuploid spermatozoa (Fig. 5a ). In addition, a high-frequency of developmental abnormalities were observed in offspring, even when incubated at 29 °C after fertilization (Fig. 5e and f ). If gametes with compromised genomic integrity are transmitted to the next generation, the result will be detrimental to the species. Therefore, a mechanism to avoid fertilization in high-temperature environments will be advantageous.
Because the meiosis checkpoint controls the differentiation from metaphase to early spermatid, spermatozoa that have matured prior to temperature stimulation cannot be eliminated by the meiosis checkpoint. (Supplementary Figs. 4 and 5 ). Indeed, E3 spermatids were abundantly observed in the testis after 1-day incubation at 34 °C (Fig. 1a and Supplementary Fig. 3 ). Moreover, mature spermatozoa did not lose sperm motility in high temperatures (Supplementary Fig. 10 ). Therefore, mature sperm differentiated prior to temperature stimulation are expected to be motile with fertilization capabilities even at 34 °C. However, sperm motility and fertilization rate were significantly reduced by a 1-day exposure to 34 °C (Figs. 4 a, c and 5c ). Taken together, we propose that Trpv4 -mediated reduction of 20β-S synthesis is the mechanism responsible for inhibiting fertilization of mature sperm which has already passed the meiosis checkpoint prior to exposure to high temperature.
Neither upregulated expression of Trp family genes nor Leydig cell-specific apoptosis at high temperatures was observed in medaka . Temperature stimulation at 39 °C resulted in reduced sperm motility and induction of apoptosis, especially in spermatocytes (Supplementary Fig. 14 ), while molecular changes identified in zebrafish were not observed in medaka . These results suggest that factors other than Trp and meiotic checkpoints are involved in the apoptotic response to high temperature in medaka . We propose that response to high temperature in Leydig cells was established as a system to actively suppress fertilization in zebrafish , whose adaptive water temperature range matched the molecular characteristics of Trpv4 . | Exposure of testes to high-temperature environment results in defective spermatogenesis. Zebrafish exposed to high temperature exhibited apoptosis not only in germ cells but also in Leydig cells, as expected from studies using mice or salmon. However, the role of testicular somatic cells in spermatogenesis defects remains unclear. We found that in Leydig cells the Trpv4 gene encoding the temperature sensitive ion channel was specifically upregulated in high temperature. High temperature also reduced hormone synthesis in Leydig cells and led to a prompt downregulation of sperm motility. In the Trpv4 null mutant, neither Leydig cell-specific apoptosis nor decreased sperm motility was observed under high temperature. These results indicate that Leydig cell specific-apoptosis is induced via Trpv4 by high temperature. Notably this Trpv4 -dependent mechanism was specific to Leydig cells and did not operate in germ cells. Because sperm exposed to high temperature exhibited compromised genome stability, we propose that temperature sensing leading to apoptosis in Leydig cells evolved to actively suppress generation of offspring with unstable genome.
Under high temperature in zebrafish, the temperature sensitive ion channel Trpv4 is upregulated in Leydig cells, inducing apoptosis. Leydig cell-specific apoptosis decreases steroid hormone synthesis, impairing the motility of sperm with compromised genome integrity.
Subject terms | Supplementary information
| Supplementary information
The online version contains supplementary material available at 10.1038/s42003-023-05740-y.
Acknowledgements
We are grateful to Mrs. Natsuko Okuda at OMPU for her excellent care of zebrafish and medaka used in this study. We are grateful to medical students at OMPU, Mr. Akihiro Tani, Ms. Yuri Ozaki, Mr. Tomohiro Hirayama, Mr. Takeyoshi Murakami, Mr. Kei Yotsumoto and Mr. Kakeru Terada in particular, for their assistance in some of the experiments. We thank Dr. Toshiya Nishimura for critically reading the manuscript. This research was partially supported by Grant-in-Aid for Scientific Research (19K06460, Y.Y.).
Author contributions
Y.Y. carried out the experiment and analyzed the data. D.H. carried out the LC–MS/MS analysis. Y.Y. designed the study. F.O. supervised the study. Y.Y. and F.O. wrote the paper.
Peer review
Peer review information
Communications Biology thanks Juan (I) Fernandino and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary Handling Editors: Frank Avila and Tobias Goris. A peer review file is available.
Data availability
The video data are available from the authors upon request. The source data behind the figures can be found in Supplementary Data 1 .
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:56 | Commun Biol. 2024 Jan 13; 7:96 | oa_package/7d/96/PMC10787748.tar.gz |
|
PMC10787749 | 38218990 | Introduction
A considerable body of evidence suggests that disparities in SES—reflecting education, income, occupational prestige, and subjective status—play a critical role in shaping people's health trajectories, with lower SES associated with elevated morbidity and mortality rates 1 – 6 . Many of these diseases—including, for example, asthma 7 , atopic dermatitis 8 , food allergies 9 , systemic lupus erythematosus 10 , and periodontal disease 11 —vary widely in their pathologies but share a common etiological pathway involving immune dysregulation, and they are more common in lower socioeconomic strata than among people with higher SES 12 . Three strands of evidence also document associations between SES and biomarkers of the immune system. First, many studies report associations between childhood SES and pro-inflammatory markers in circulating peripheral blood (such as interleukin-6 and C-reactive protein (CRP) 13 ) that, if chronically activated, presage a wide range of diseases including, for example, type 2 diabetes, some cancers, and cardiovascular disease. Second, studies have examined white blood cell composition, finding that low SES is associated with increased development and circulation of pro-inflammatory immune cells (monocytes and neutrophils 14 ); that parental education is positively associated with the proportion of lymphocytes and negatively associated with the proportion of neutrophils; and that, among older adults, SES is related to shifts in cell composition indicative of immunosenescence 15 – 18 . And finally, a limited number of studies have examined functional assays of immune response, sometimes ex vivo, and show that, once again, childhood socioeconomic status is a risk factor for immune dysregulation, possibly more so among boys 19 – 22 .
Nevertheless, despite the abundance of evidence connecting socioeconomic inequalities to immune-related diseases and biomarkers, the molecular etiology of SES-mediated immune alterations remains less explored.
A growing number of studies have examined SES and transcriptional patterns indicative of immune functioning. Research consistently shows people from low SES backgrounds have greater proinflammatory activity 14 , 22 – 26 . Additionally, SES is associated with the expression of genes regulated by the glucocorticoid receptor and interferon response factors, suggesting a suppression of adaptive immunity and innate antiviral immunity 22 , 27 . This signature pattern—involving the upregulation of proinflammatory genes and the downregulation of type I interferon innate antiviral response genes among the lower strata of status, called the “Conserved Transcriptional Response to Adversity” (CTRA)—has been observed in numerous populations with a range of research designs. Essentially, SES is associated with CTRA activation, which in turn is associated with the molecular underpinnings of immune-related and inflammatory diseases 22 , 28 – 30 .
The current study seeks to expand on these findings by providing a broader mapping of associations between SES and the molecular signaling pathways that regulate immunity. Understanding the impact of socioeconomic status on immune gene expression requires a systems-oriented approach that extends beyond the functional examination of individually differentially expressed genes and includes the network of upstream regulators. Furthermore, traditional transcription factor binding motif enrichment analysis suffers from the lack of tissue specificity. We aim to address these drawbacks in this paper by detailing upstream tissue-specific transcriptional factors and protein-protein interactors as a networked system, along with differentially regulated genes, that responds to SES. Such an approach offers a comprehensive understanding of how SES is associated with the molecular mechanisms that drive biological processes such as gene expression, cell signaling, and cell fate, which ultimately lead to disease in individuals of lower SES 31 , 32 . Significantly, we leverage the comprehensive resource of human tissue-specific gene regulatory network of Marbach et al. 33 to isolate transcription factors that could play a pivotal role in modulating the expression of SES—differentially expressed genes in whole blood. Additionally, by incorporating direct protein-protein interactors, we create a broad, inclusive and holistic set of genes and proteins that act as a networked system in disrupting essential biological processes. Our approach reveals the decisive role of transcription factors in driving SES—associated dysregulation that previously would be attributed to SES—differentially expressed genes.
Emerging evidence points to the intricate interplay between low socioeconomic position and high body mass index (BMI) 34 , 35 . Recent meta-analyses have consistently linked lowered socioeconomic status and elevated inflammatory biomarkers largely via obesity 36 , 37 . Although BMI is an imperfect indicator of obesity as it does not distinguish fat from fat-free mass, Liu et al. 38 observed a likely strong mediation of BMI in the negative relationship between childhood SES and adulthood inflammatory marker C reactive protein (CRP). Additionally, previous systematic reviews have shown that improved indicators of fat and obesity follow a similar pattern of health disparity to those seen with BMI 37 . Thus, we examine the potential mediation of BMI, as an indicator of obesity, along with other common social and behavioral mediators of SES, and the entire system of differentially expressed genes, upstream transcription factors and protein-protein interactors that drive immune dysregulation as a result of lowered SES.
We focus on American adults in their late 30s, who are ostensibly healthy but nevertheless at risk for later health challenges. We leverage the mRNA data from 4543 early adults participating in the National Longitudinal Study of Adolescent Health (Add Health) 39 . First, we identify cell functional pathways and their directionality in SES-related dysregulation of the immune system. To this end, we capitalize on publicly available pathway ontologies to functionally annotate genes that show changes in expression and that cluster together. Second, we identify upstream modulators and regulators of the differentially expressed genes to provide a systems perspective on SES and immunity. Such a view also isolates potential targets for remediation. Finally, we consider the behavioral and health-related factors that may explain associations between SES and the immune cell transcriptome. Results reveal that SES is associated with widespread dysregulation of immunity involving intricately interrelated differentially expressed genes, transcription factors, and protein-protein regulators. Furthermore, BMI is a likely, potent mechanism driving these patterns.
Add health and differential gene expression
The National Longitudinal Study of Adolescent to Adult Health (Add Health) is a representative study of adolescents in the United States who were followed into adulthood over five waves of data collection 39 . Study participants provided informed written consent with respect to all aspects of the Add Health study in accordance with the University of North Carolina School of Public Health Institutional Review Board (IRB). Transcriptomic profiles of the consenting participants were collected during Wave V of the Add Health Study (2016–2017) via an intravenous blood draw (subjects aged 33 to 43 years). Access to the restricted-use Add Health transcriptomic data was obtained by completing a contractual and data use agreement. Additional detailed information on the study design, interview procedures, consent procedures, demographic assessments, collection, sequencing and quality control of the blood sample, and derivation of the analytical samples is reported in Supplementary Methods and in previous studies 40 – 42 . Furthermore, the data analysis and all methods presented in this work were carried out in accordance with the relevant ethical guidelines and regulations. We draw on the mRNA-seq data of 4015 subjects with complete information on the models' variables. Socioeconomic status composite scores were calculated as the sum of standardized indicators of education, income, occupation, and subjective socioeconomic status of the early adult subjects 42 – 44 .
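The composite construction described above (a sum of standardized indicators) can be sketched as follows. This is an illustrative sketch only, not the Add Health scoring code; the function name and inputs are hypothetical, and the actual scoring may handle missing data and scaling differently.

```python
import numpy as np

def ses_composite(education, income, occupation, subjective_status):
    """Sum of z-scored SES indicators (one composite score per subject).

    Illustrative sketch of a standard composite construction; details of
    the published Add Health scoring may differ.
    """
    indicators = [np.asarray(x, dtype=float)
                  for x in (education, income, occupation, subjective_status)]
    # Standardize each indicator to mean 0, SD 1, then sum across indicators
    return sum((x - x.mean()) / x.std() for x in indicators)
```

Because each indicator is standardized before summing, the composite weights the four indicators equally and is invariant to their original measurement units.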
Genes with low counts were excluded from the analysis (see Supplementary Methods ). After normalizing the raw mRNA-seq counts using a weighted trimmed mean of log expression ratios (TMM normalization) 45 using the edgeR 46 package in R, we analyzed genes whose expression varied significantly by the early adulthood socioeconomic composite score using a linear model analysis 47 , 48 . We controlled for covariates that could influence mRNA abundance levels: sex, self-described race, age, pregnancy status, sample analysis plate, number of hours fasting prior to blood sample collection, use of anti-inflammatory medication (e.g., NSAIDS, COX-2 inhibitors, inhaled corticosteroids), instances of common subclinical symptoms (e.g., colds, flu), and common infectious or inflammatory diseases (e.g., infection, allergies) in the 4 weeks prior to blood sample collection. We also corrected for batch effects using the ComBat function in the sva package 49 in R.
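The per-gene linear model analysis was carried out with the R/Bioconductor pipeline cited above (edgeR TMM normalization, linear models, ComBat batch correction). As a minimal illustration of the core step, the sketch below regresses each gene's (already normalized, log-scale) expression on SES plus covariates and applies Benjamini–Hochberg FDR correction; it is a simplified stand-in for the actual pipeline, and all variable names are hypothetical.

```python
import numpy as np
from scipy import stats

def bh_fdr(p):
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(p)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty_like(ranked)
    out[order] = np.clip(ranked, 0, 1)
    return out

def per_gene_ols(log_expr, ses, covariates):
    """Regress each gene (rows of log_expr) on SES plus covariates.

    Returns the SES slope and BH-adjusted p-value for every gene.
    """
    ses = np.asarray(ses, dtype=float)
    n = len(ses)
    X = np.column_stack([np.ones(n), ses, np.asarray(covariates, dtype=float)])
    df = n - X.shape[1]
    XtX_inv = np.linalg.inv(X.T @ X)
    H = XtX_inv @ X.T                      # hat-style projection for coefficients
    betas, pvals = [], []
    for y in log_expr:                     # log_expr: genes x subjects
        b = H @ y
        resid = y - X @ b
        sigma2 = resid @ resid / df
        se = np.sqrt(sigma2 * XtX_inv[1, 1])
        t = b[1] / se                      # t-statistic for the SES coefficient
        betas.append(b[1])
        pvals.append(2 * stats.t.sf(abs(t), df))
    return np.array(betas), bh_fdr(np.array(pvals))
```

In the actual analysis the covariate matrix would include sex, race, age, plate, fasting hours, medication use, and recent illness indicators, as listed above.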
Our overall analytic strategy is to (1) estimate clusters of genes across the whole genome and, within these clusters, identify genes that are differentially expressed (DE) by SES (hereafter, SES–DEG); (2) characterize the biological function of these DE genes and the gene clusters that are likely to contain them; (3) identify transcription factors and their protein neighbors that are associated with these DE genes and act as a networked system; (4) determine the relative functional relevance of SES–DEG and upstream regulators in manifesting immune dysregulation; and, finally, (5) identify behavioral mediators that may account for associations between SES and DE genes and their upstream regulators.
Whole-genome clusters and cluster-SES relationship
Processed gene expression data from 14,251 transcripts in 4015 individuals were subject to unsupervised clustering using Weighted Gene Coexpression Network Analysis (WGCNA) 50 . We identified a total of 19 clusters; the number of genes in each cluster and the clusters' overlaps with the SES–DEG are shown in Supplementary Fig. S1 . To identify the clusters that have a significant relationship to SES, we modelled the cluster eigengenes (a summarized expression vector of each cluster) as a linear function of SES, as in the differential expression analysis. Additionally, we performed a Fisher exact test to identify clusters that show an enrichment for SES–DEG. Together, the two tests resulted in clusters that (1) have a significant cluster-SES relationship and (2) are enriched for SES—up- or downregulated genes (see Supplementary Fig. S2 ). Four clusters (Clusters 7, 11, 13 and 17) had eigengenes that are significantly differentially expressed by SES. Of the 4 clusters, Cluster 11 showed an overrepresentation of SES—downregulated genes, while Clusters 7, 13 and 17 showed an overrepresentation of SES—upregulated genes. In this context, upregulation refers to a positive association between SES and mRNA abundance levels.
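The two cluster-level tests can be sketched as follows, assuming (as in WGCNA) that a module eigengene is the first principal component of the standardized cluster expression matrix; the helper names and thresholds are illustrative, not from the WGCNA package itself.

```python
import numpy as np
from scipy.stats import fisher_exact

def module_eigengene(expr_cluster):
    """First principal component of standardized cluster expression.

    expr_cluster: subjects x genes matrix for one cluster.
    Returns one summary score per subject (sign is arbitrary).
    """
    Z = (expr_cluster - expr_cluster.mean(axis=0)) / expr_cluster.std(axis=0)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, 0] * S[0]

def deg_enrichment(cluster_genes, deg_set, genome):
    """One-sided Fisher exact test: is the cluster enriched for DE genes?"""
    cluster_genes, deg_set, genome = map(set, (cluster_genes, deg_set, genome))
    a = len(cluster_genes & deg_set)          # in cluster, DE
    b = len(cluster_genes - deg_set)          # in cluster, not DE
    c = len(deg_set - cluster_genes)          # DE, outside cluster
    d = len(genome - cluster_genes - deg_set) # neither
    return fisher_exact([[a, b], [c, d]], alternative="greater")
```

The eigengene returned here would then be regressed on SES with the same covariates used in the gene-level models.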
Functional enrichment analysis of the differentially expressed genes and significant clusters
Functional enrichment analysis for the SES–DEG (see Supplementary Fig. S3 and Supplementary Dataset S1 ) and WGCNA identified cluster genes (see Supplementary Fig. S4 and Supplementary Dataset S2 ) was performed using R Bioconductor package ReactomePA 51 to identify the biological function of the genes (FDR p < 0.05). The Reactome results are organized in a hierarchical structure of biological pathways with each biological pathway being a node that shows parent–child relationships 52 . We relied on this parent–child relational database to pool together multiple pathways under the same parent node in order to better understand the large-scale changes (up to 3 hierarchical levels). The significance of the parent node was determined by its most significant child.
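The pooling rule described above (a parent node's significance is taken from its most significant descendant, up to three hierarchical levels) can be sketched as a simple propagation over a child-to-parent mapping. The mapping and pathway identifiers below are hypothetical; in practice they would be extracted from the Reactome hierarchy.

```python
def pool_to_parents(pathway_pvals, child_to_parent, levels=3):
    """Propagate pathway p-values up a hierarchy.

    A parent's pooled p-value is the minimum (most significant) p-value
    among itself and its descendants within `levels` steps.
    """
    pooled = dict(pathway_pvals)
    for _ in range(levels):
        nxt = dict(pooled)
        for node, p in pooled.items():
            parent = child_to_parent.get(node)
            if parent is not None:
                nxt[parent] = min(nxt.get(parent, 1.0), p)
        pooled = nxt
    return pooled
```

Running the loop `levels` times lets a significant leaf pathway reach an ancestor up to three levels above it, matching the pooling depth described in the text.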
Functional enrichment analysis of the SES–DEG and the upstream regulators was performed using the ClueGO 53 plugin in Cytoscape 54 . This plugin allows for the combined analysis of multiple gene lists using a preselected ontology. We analyzed the SES–DEG (up- and downregulated gene lists) along with their upstream regulators (transcription factors and protein neighbors) with the Reactome ontology.
Identifying key controllers of genes exhibiting differential expression by SES
Upstream regulators of the SES–DEG (Set A; see Supplementary Fig. S5 ) were categorized into (1) transcription factors that are themselves differentially expressed (Set B; see Supplementary Fig. S5 ), (2) protein neighbors of the differentially expressed transcription factors (Set C; see Supplementary Fig. S5 ), and (3) transcription factors that putatively modulate the expression of differentially expressed genes (Set D; see Supplementary Fig. S5 ). Marbach et al. 33 constructed tissue-specific regulatory networks that linked transcription factors and genes with a score based on a curated collection of sequence binding motifs. Those transcription factors that had a medium or greater confidence (> 0.4) of modulating the expression of the differentially expressed genes in blood tissue were included in the set of upstream regulators (Sets B and D). A total of 643 transcription factors were identified in the blood tissue-specific gene regulatory network. 304 transcription factors had gene interactions with at least a medium confidence score. Protein neighbors of differentially regulated transcription factors were obtained using the STRING database 55 . Each protein-protein interaction (PPI) in STRING is annotated with a score that indicates the confidence of the interaction. Only neighbors with scores of at least high confidence (> 0.7) were included in the set of upstream regulators (Set C). Thus, Set A represents the DE genes and Sets B, C and D together constitute their upstream regulators. The 304 tissue-specific transcription factors had interactions with 8543 unique protein-protein neighbors. 1750 neighbors exceeded the confidence threshold for PPI.
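The construction of Sets B, C, and D can be sketched as set operations over two scored edge lists: TF–gene edges from the blood-specific regulatory network (kept at confidence > 0.4) and STRING protein–protein edges (kept at confidence > 0.7). The data structures and names below are illustrative; the actual networks are the Marbach et al. and STRING resources cited above.

```python
def upstream_regulators(deg, tf_gene_edges, ppi_edges, all_tfs,
                        tf_conf=0.4, ppi_conf=0.7):
    """Partition upstream regulators of DE genes (Set A) into Sets B, C, D.

    tf_gene_edges: iterable of (tf, gene, confidence) from a tissue-specific
        regulatory network; ppi_edges: iterable of (protein, partner,
        confidence) from a PPI database.
    """
    deg = set(deg)
    # TFs with at least medium-confidence edges to a DE gene in blood tissue
    linked_tfs = {tf for tf, gene, w in tf_gene_edges
                  if gene in deg and w > tf_conf and tf in all_tfs}
    set_b = linked_tfs & deg    # Set B: TFs that are themselves DE
    set_d = linked_tfs - deg    # Set D: TFs modulating DE genes, not DE
    # Set C: high-confidence protein partners of the DE transcription factors
    set_c = {p for a, p, w in ppi_edges if a in set_b and w > ppi_conf} - deg
    return set_b, set_c, set_d
```

Sets B, C, and D together form the upstream-regulator gene list that is combined with Set A in the enrichment analyses.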
Possible mediators of SES and DE genes and upstream regulators
We examined behavioral and psychobiological processes that might mediate associations between Wave V early adulthood SES and the expression of the genes and upstream regulators using a counterfactual mediational framework 56 . The mediators included Body Mass Index (BMI), perceived stress (based on Cohen's Perceived Stress Scale 57 ), current self-reported smoking status, consumption of alcoholic drinks (days drank over the past 30 days; categorized as 0 drinks, 1–2 drinks, 3–5 drinks, and more than 5 drinks per occasion), financial stress (self-reported difficulty in paying bills), and access to health insurance. We also compared the mediation of BMI with the mediation observed for waist circumference, which has been reported to be a more accurate measure of fatness 58 .
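The paper uses a counterfactual mediation framework 56 ; as a simpler illustration of the same decomposition, the sketch below computes a product-of-coefficients estimate of the SES → BMI → expression indirect effect under linear models with no interaction. All variable names are hypothetical, and this sketch omits the confounder adjustment and inference that the counterfactual framework provides.

```python
import numpy as np

def ols_slope(x, y):
    """Slope of y on x with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(ses, bmi, expr):
    """Product-of-coefficients estimate of the SES -> BMI -> expression path.

    Returns (indirect effect a*b, total effect of SES on expression).
    """
    ses, bmi, expr = (np.asarray(v, dtype=float) for v in (ses, bmi, expr))
    a = ols_slope(ses, bmi)                          # SES -> mediator
    # mediator -> outcome, adjusting for SES (gives the 'b' path)
    X = np.column_stack([np.ones_like(ses), ses, bmi])
    b = np.linalg.lstsq(X, expr, rcond=None)[0][2]
    total = ols_slope(ses, expr)
    return a * b, total
```

When the mediator fully transmits the association, the indirect effect approaches the total effect, which is the pattern the text describes for BMI.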
Randomization test of differentially expressed genes and upstream regulators
We quantified the statistical significance of the observed results by performing randomization tests based on 1000 randomly generated sets of differentially expressed genes. Random samples were drawn from the entire genome to obtain a set of genes equal in number to Set A (see Supplementary Fig. S5 ). Sets B, C, and D were derived from every randomly generated Set A using the same procedure used with the SES–DEG. We then computed the significance (empirical p -value) of every actual gene in Set A (DE genes) and Sets B, C, and D (upstream regulators) 59 by comparing them to their respective sets from the 1000 randomly generated sets. We obtained p -values for each of the sets of genes by combining the p -values of every gene in the set using Fisher’s method. | Results
Transcriptional alterations with SES are characterized by organism wide dysregulation
We performed a differential gene expression analysis followed by an enrichment analysis of the resulting SES–DEG (see Supplementary Dataset S2 and Supplementary Fig. S3 ). Upregulated genes indicate a significant association between high SES and high expression (i.e., a positive association). Functional enrichment of the SES–DEG (423 upregulated genes and 389 downregulated genes) showed predominantly upregulated pathways involving metabolism, signal transduction, and cellular response to stress, driven by a core of ribosomal and translational genes. Interestingly, these cytosolic ribosomal genes ( RPL - and RPS -genes) were found to be downregulated with aging in an analysis of the human peripheral blood and have previously been linked with SES 42 . Indeed, a combined WGCNA and SES—differential expression analysis (see Fig. 1 ) showed a tight clustering of the ribosomal and transcriptional activity genes (Cluster 11 in Fig. 1 ) that are responsible for the SES-upregulated pathways. One cluster of SES-DEGs (Cluster 7) displayed dysregulation of the immune system and response, hemostasis, and cell death, predominantly driven by downregulated genes, while another cluster (Cluster 11), largely comprising upregulated ribosomal genes, affected transcriptional events in several cellular functions. Cluster 13 consisted of genes involved in cell division and cell cycle control, dysregulating a relatively small number of pathways in signal transduction and the immune system, while Cluster 17 comprised too few genes for a meaningful enrichment interpretation.
An inspection of enriched pathways reveals that SES-upregulated pathways include interferon innate immune response and neutrophil degranulation (see Fig. 2 ). Curiously, type II interferon (IFN-γ) signaling is also upregulated. Despite the lack of direct evidence for their involvement, the genes that are tied to the upregulation of type I and type II interferon signaling do share HLA -genes that regulate the antiviral immune response, which may explain the overrepresentation of both classes of immunity. Figure 2 also shows an attenuation of proinflammatory pathways with higher SES (i.e., upregulation of proinflammatory pathways with low SES) via pathways in the proinflammatory nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) and proinflammatory toll-like receptors (TLR).
Upstream regulators structure immune dysregulation with socioeconomic disparities
The biological pathways and molecular mechanisms associated with socioeconomic disparity were also examined via an analysis of upstream regulators (transcription factors and protein partners) of the DE genes. A combined functional enrichment analysis of the resulting upstream regulators (239 upstream regulators of SES-upregulated genes and 87 upstream regulators of SES-downregulated genes) and SES–DEG (see Fig. 3 and Supplementary Dataset S3) indicates a significant role of the upstream regulators in structuring the overrepresented pathways in the immune system. The upstream regulators include tissue-specific transcription factors that are (1) differentially expressed (Set B, see Supplementary Fig. S5 and Supplementary Table S1), (2) potentially regulating the expression of a differentially expressed gene (Set D, see Supplementary Fig. S5 and Supplementary Table S1), and (3) PPI neighbors of differentially expressed transcription factors (Set C, see Supplementary Fig. S5 and Supplementary Table S1). Figure 3 indicates that the enriched immune pathways have a larger proportion of upstream targets. Importantly, many of these pathways were also enriched by SES–DEG, signifying that the patterns of immune dysregulation observed in Fig. 3 do not reflect the inclusion of upstream regulators per se, but rather the significant mechanistic role played by these transcription factors and protein partners. Furthermore, the immune dysregulation observed in a functional enrichment analysis of the upstream targets without the inclusion of SES–DEG (see Supplementary Fig. S6) mirrors that enriched by SES–DEG alone and by SES–DEG with upstream targets, suggesting that SES–DEG are not wholly responsible for SES-associated immune dysregulation.
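As a concrete illustration of how the three regulator sets above (B, C, D) can be assembled, the toy network and gene symbols below are hypothetical; only the set logic reflects the described construction.

```python
# Hypothetical inputs: SES-DEGs, the TFs present in the blood-specific
# regulatory network, TF -> target edges from that network, and STRING
# protein-protein neighbors of each TF.
de_genes = {"RPL13", "HLA-B", "IRF7", "CREB1"}
network_tfs = {"CREB1", "TP53", "RELA", "STAT1"}
grn_edges = {("STAT1", "IRF7"), ("RELA", "HLA-B"), ("TP53", "CDKN1A")}
ppi_neighbors = {"CREB1": {"TP53", "BTRC"}, "STAT1": {"JAK1"}}

set_b = de_genes & network_tfs                                  # DE transcription factors
set_c = set().union(*(ppi_neighbors.get(tf, set()) for tf in set_b))  # PPI neighbors of DE TFs
set_d = {tf for tf, target in grn_edges if target in de_genes}  # TFs regulating a DE gene
upstream_regulators = set_b | set_c | set_d
print(sorted(upstream_regulators))
```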
Figure 4 shows the upstream regulators along with the SES–DEG represented as layers (4 in total, where the innermost circle of genes and upstream regulators is labelled “Layer 1”, sequentially to the outermost group of genes and regulators labelled “Layer 4”) based on their interaction scores derived from the STRING database and the number of times each gene or upstream regulator is involved in functional immune pathways that are enriched in Fig. 3 . The genes that are responsible for the enrichment of each immune pathway were identified. The innermost layer in Fig. 4 depicts genes and upstream regulators that are involved in more than ten functional pathways, whereas the outermost layer consists of genes and upstream regulators that are only involved in one enriched biological process. The most essential regulators involved in the dysregulation of the immune system related to SES disparities thus occupy the center of the diagram. Significantly, variations in early adult SES are prominently linked to alterations in cytokine signaling in the immune system involving interleukin and interferon gamma signaling, Toll-like receptor signaling cascade, and TNF pathways (Fig. 3 and innermost layer in Fig. 4 ).
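The layering rule can be read as a simple binning of each gene or regulator by the number of enriched immune pathways it appears in. The text fixes only the innermost (more than ten pathways) and outermost (exactly one pathway) cut-offs; the intermediate thresholds and the counts below are assumptions.

```python
def assign_layer(n_pathways: int) -> int:
    """Bin a gene/regulator into one of four layers by how many enriched
    immune pathways it participates in (Layer 1 = most central)."""
    if n_pathways > 10:
        return 1   # innermost: involved in more than ten pathways
    if n_pathways > 5:
        return 2   # assumed intermediate cut-off
    if n_pathways > 1:
        return 3   # assumed intermediate cut-off
    return 4       # outermost: exactly one enriched pathway

# Hypothetical pathway-membership counts.
counts = {"CREB1": 14, "RELA": 12, "IRF7": 6, "HLA-B": 3, "RPL13": 1}
layers = {gene: assign_layer(n) for gene, n in counts.items()}
print(layers)
```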
Proteins most deeply linked to SES inequalities invariably revolve around cyclic 3ʹ–5ʹ adenosine monophosphate response element-binding protein (CREB) and NF-κB pathway signaling. Variations in the activity of CREB and NF-κB selectively upregulate the transcription of the interferon response factor family while simultaneously inhibiting the activity of proinflammatory interleukins in subjects with high SES. Genes and transcription factors (via the upstream analysis) that are central to the functional response to SES disparities in gene regulation (shown in Fig. 3) are also shown in Fig. 4. Molecules are placed in layers depending on their contribution (instances of enrichment of a pathway) to the functional immune enrichment. The innermost layer consists of transcription factors such as CREB1 , TP53 , RELA , REL , BTRC , BTK and the MAPK family. CREB and REL proteins, among other important functions, play a crucial role in activating the fight-or-flight signaling pathways that are directly responsible for eliciting the CTRA gene expression profiles. Although a large fraction of the proteins is derived from the set of upregulated genes, it is noteworthy that these regulators can have far-reaching impacts that are not always in the expected direction. This is particularly true for the proinflammatory toll-like receptor (TLR) pathways (see Fig. 3), which have a larger proportion of downregulated than upregulated genes, yet functionally interact with the upstream regulators connected to upregulated genes. The greater influence of the upstream regulators connected to upregulated genes suggests that the downregulation of certain genes could arise from the inhibitory activity of the transcription factors.
Social/behavioral mediators of SES-transcriptome associations
Figure 5 reports the median percentage-mediated ratio for key SES-related social/behavioral processes in every layer of SES-related gene regulation. Intriguingly, the innermost group of genes, which are most centrally implicated in SES-associated dysregulation, is also the least mediated (lowest median percentage-mediated ratio) by every behavioral risk factor. However, this could reflect the fact that the innermost group of genes are not themselves differentially expressed despite potentially inducing larger mediated changes in the outer layers of genes. BMI provided the strongest explanation of the association between SES and the transcriptional response of the gene groups, followed by tobacco smoking (also see Supplementary Figs. S7 and S8). No significant mediation was observed for financial stress or access to health insurance. Furthermore, we observed no significant differences between BMI and waist circumference in mediating the SES-associated immune transcriptome (see Supplementary Fig. S9).

Discussion
The present analyses expand the scope of prior studies of SES-related alteration in transcriptomic profiles of human immune response genes by identifying new genomic functional impacts (e.g., ribosomal biology) and new features of the gene regulatory architecture of SES (e.g., TP53 , BTRC , BTK and MAPK transcriptional control pathways). Consistent with prior research, we find that SES is negatively associated with pro-inflammatory pathways in the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) and proinflammatory toll-like receptors (TLR). Our findings also link high SES to elevated type II interferon (IFN-γ) signaling and identify a related upregulation of HLA - genes, which may underpin both type I and type II interferon effects.
The analyses extend previous research by mapping networks of upstream blood-specific transcriptional factors and protein interactions that could play a vital role in structuring the observed transcriptional landscape. The complexity of the impact of socioeconomic inequalities on the immune system and its association with diseases with widely varying pathologies warrants a systems-oriented approach to comprehensively analyze the SES-related perturbations in the immune transcriptome. To our knowledge, most prior research on SES focuses solely on transcriptomic alterations with a particular focus on pro-inflammatory action 30 . Here, we include the upstream regulators to depict an enhanced view of dysregulation with SES, shedding light on a tightly knit group of transcription factors that play a central role in modulating the transcriptomic alterations.
Our findings map a central network of upstream regulators that vary as a function of early adult SES (central positions in Fig. 4 ). Given the lack of change in the expression of the genes that encode these transcriptional factors, receptor-mediated post-transcriptional modification of these transcriptional factors (e.g., receptor-mediated phosphorylation of CREB1 , TP53 , RELA , REL , BTRC , BTK and MAPK ) might modulate the expression of their downstream targets.
Despite the congruence between our results and the CTRA model in terms of dysregulated pathways, the findings show that SES disparities in early adulthood, interestingly, do not alter the same set of genes identified in previous studies of CTRA. For example, the attenuated innate interferon response in the CTRA model operates through the direct suppression of IFNA and IFNB . Here, we observed upregulated HLA- genes that additionally regulate the antiviral innate response. These findings call for further study of SES-modulated genes in the immune system beyond the signaling pathways already implicated in structuring the conserved transcriptional response to adversity (CTRA) RNA profile 60 .
The observed SES-related transcriptional perturbations are associated with both up- and downregulated functional pathways in the immune system. Social stress-induced gene alterations in humans are associated with diseases that include both upregulation of the immune system as well as suppressed immune responsiveness with lowered social status. This seemingly perplexing pattern has been explained by the selective characteristics of the immune transcriptome, i.e., the increase in expression of certain pro-inflammatory genes and the repression of groups of antiviral immune response genes 40 , 61 – 63 . It is, therefore, unsurprising that a similar pattern of dysregulated immune pathways is emergent in the blood transcriptomic landscape of subjects in Add Health with contributions of enrichment from both up- and downregulated gene clusters. Such a finding calls for additional research that moves beyond the initial general finding that stressors upregulate proinflammatory genes and downregulate antiviral genes.
Lastly, among the common mediating (or possible explanatory) mechanisms studied here, BMI consistently emerged as a plausible mediator of the SES associations with immune cell gene regulation. Smoking also appears to play a significant role in the SES-related transcriptional alterations. These results underscore the importance of a gene regulatory network approach in formulating a comprehensive understanding of psychosocial stressors and their impact on biological mechanisms early in life. Studies have already established that changes caused by socioeconomic disparities in early adulthood could have far-reaching implications for chronic conditions in later adulthood. Identifying novel regulators of such perturbations is an important step toward formulating a mitigating strategy.
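In its simplest product-of-coefficients form, the percentage-mediated quantity reported in the results is the indirect effect (SES to mediator to expression) divided by the total SES effect. The following is a minimal synthetic sketch, not the survey-weighted models actually fitted in the paper; all coefficients and data are made up.

```python
import random
random.seed(0)

# Synthetic data: SES raises BMI, and both SES and BMI shift expression.
n = 5000
ses = [random.gauss(0, 1) for _ in range(n)]
bmi = [0.5 * s + random.gauss(0, 1) for s in ses]
expr = [0.3 * s + 0.4 * b + random.gauss(0, 1) for s, b in zip(ses, bmi)]

def slope(x, y):
    """Simple-regression slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)

c_total = slope(ses, expr)                       # total SES effect
a = slope(ses, bmi)                              # SES -> mediator path
resid_bmi = [b - a * s for s, b in zip(ses, bmi)]
b = slope(resid_bmi, expr)                       # mediator -> outcome, net of SES (Frisch-Waugh)
prop_mediated = a * b / c_total                  # population value here: 0.2 / 0.5 = 0.4
print(f"proportion mediated ~ {prop_mediated:.2f}")
```

The residualization step implements the two-regressor adjustment without a matrix library; in real analyses one would fit the mediator and outcome models jointly and bootstrap the proportion mediated.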
We used tissue-specific regulatory networks to link transcription factors to differentially expressed genes, and subsequently the STRING database to find protein interaction partners. There were 643 transcription factors identified in the gene regulatory network, with an even smaller number (304) having a confidence score above the threshold used (0.4). High-confidence (threshold > 0.7) protein-protein neighbors derived from the whole network of 643 transcription factors numbered 3051 (18,875 total protein-protein neighbors), which were pruned to 1750 for the 304 transcription factors. Given the central role of many of the transcription factors and protein partners, one could argue that the set of upstream regulators found from the SES–DEG could equally have been derived from a random set of genes. To examine this possibility, we performed randomized trials, starting with randomly selected sets of “DE” genes and deriving the upstream regulators of those random sets. We then compared the upstream regulators of each random set of “DE” genes to our observed results (see Supplementary Figs. S10 and S11). While some of the individual transcription factors may not reach statistical significance (Supplementary Fig. S10), the entire set of upstream regulators is highly significant (Supplementary Fig. S11).

Abstract

Disparities in socio-economic status (SES) predict many immune system-related diseases, and previous research documents relationships between SES and the immune cell transcriptome. Drawing on a bioinformatically-informed network approach, we situate these findings in a broader molecular framework by examining the upstream regulators of SES-associated transcriptional alterations. Data come from the National Longitudinal Study of Adolescent to Adult Health (Add Health), a nationally representative sample of 4543 adults in the United States.
Results reveal a network—of differentially expressed genes, transcription factors, and protein neighbors of transcription factors—that shows widespread SES-related dysregulation of the immune system. Mediational models suggest that body mass index (BMI) plays a key role in accounting for many of these associations. Overall, the results reveal the central role of upstream regulators in socioeconomic differences in the molecular basis of immunity, which propagate to increase the risk of chronic health conditions in later life.
Limitations
Several limitations are noteworthy. First, the hypotheses and the subsequent results are driven by mRNA abundance data collected once from every participating subject. Repeated collection of transcriptomic data would be essential to address their highly transient nature, which is likely associated with considerable noise. Second, the identification of upstream transcriptional regulators of gene expression is performed with the aid of tissue-specific gene regulatory networks. These networks link genes to transcription factors based on experimental evidence and assign a confidence score to every identified transcription factor. It is, therefore, possible that transcription factors that play a central role in cell maintenance and the cell cycle may be implicated without having a substantive role in the etiology or progression of dysfunction. We tried to account for such effects using a randomization experiment. However, direct measurements of protein abundance are required to concretely determine the role of every transcription factor. In the absence of proteomic assay data, inferring transcription factor abundance from publicly available chromatin immunoprecipitation followed by sequencing (ChIP-seq) data sources with gene expression alterations similar to those observed with lowered SES could offer valuable insights and would present an important extension of the current work. Finally, because the design is not experimental, the findings cannot be interpreted as causal relationships. This limitation is especially salient for the causal identification of social and behavioral mediators of SES-related immune dysregulation. Future research could usefully examine these and other mediators in a causal framework to disentangle the expected tissue-repair response to stress induced by these risk factors (e.g., obesity, smoking) from global immune system dysfunction.
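The randomization experiment mentioned here (and in the results) can be sketched as a permutation test: draw random "DE" gene sets of the same size, derive their upstream regulators from the regulatory network, and compare against the observed count. The network, set sizes, and trial count below are synthetic stand-ins, and the "observed" set here is itself random, so no enrichment is expected; the point is the mechanics.

```python
import random
random.seed(1)

genes = [f"G{i}" for i in range(1000)]
# Toy tissue-specific network: 50 TFs, each regulating 40 random targets.
network = {f"TF{i}": set(random.sample(genes, 40)) for i in range(50)}

def upstream(de_set):
    """TFs with at least one target among the 'DE' genes."""
    return {tf for tf, targets in network.items() if targets & de_set}

observed_de = set(random.sample(genes, 100))   # in the paper: the SES-DEG set
observed = len(upstream(observed_de))

# Null distribution: upstream-regulator counts of random gene sets of the
# same size. A small p would mean the observed regulators exceed chance.
null = [len(upstream(set(random.sample(genes, 100)))) for _ in range(500)]
p = sum(x >= observed for x in null) / len(null)
print(observed, p)
```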
Nevertheless, the results suggest that a network of transcription factors and protein partners plays a pivotal role in modulating the SES-related transcriptional response that precipitates dysregulated immune responses in terms of inflammation and innate interferon immunity. These central actors are important targets for future research connecting health disparities and socioeconomic inequalities. The results highlight the need for system-oriented analyses to comprehensively map the biological impact of SES disparities, and they represent an essential step toward identifying targets for possible prevention and intervention.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-024-51517-6.
Author contributions
S.R. conceived of the paper, performed the data analysis, and wrote the paper. S.C., K.H., and M.S. collected the data. S.C., B.L., and M.S. assisted with interpretation and writing.
Funding
This research was supported by NIH Grants R01-HD087061 (MPIs K.H. Harris and M. J. Shanahan) specifically for the present analyses, as well as P30-AG017265, R01-AG043404, and R01-AG033590; by the Swiss National Science Foundation (10531C-197964, Shanahan PI); and by the Jacobs Center for Productive Youth Development (University of Zürich). This research uses data from Add Health, a program directed by Robert Hummer and designed by J. Richard Udry, Peter S. Bearman, and Kathleen Mullan Harris (University of North Carolina at Chapel Hill). The Add Health program is funded by Grant P01-HD31921 from the Eunice Kennedy Shriver National Institute of Child Health and Human Development, with cooperative funding from 23 other federal agencies and foundations ( https://www.cpc.unc.edu/projects/addhealth/about/funders ).
Data availability
Add Health data are available at https://www.cpc.unc.edu/projects/addhealth/documentation/ . All data used in these analyses, except the transcriptomic data, are unrestricted. The mRNA-seq data are available via a restricted data contract. Additional information and the application for the restricted-use data can be accessed through the Carolina Population Center (CPC) data portal at https://data.cpc.unc.edu/projects/2/view . The Cytoscape sessions, supplemental datasets, and R code used in these analyses are available at https://github.com/socialgnome/Immune-SES .
Competing interests
The authors declare no competing interests.

Sci Rep. 2024 Jan 13; 14:1255
PMC10787750 (PMID 38218941)

Introduction
Atherosclerosis is a chronic, progressive disease characterized by the accumulation of lipids, inflammatory cells, and fibrous elements in the arterial wall, leading to the formation of atherosclerotic plaques [ 1 , 2 ]. These plaques can lead to serious clinical consequences, such as myocardial infarction (MI) and stroke, which are major causes of morbidity and mortality worldwide [ 3 ]. The pathogenesis of atherosclerosis involves multiple genetic, environmental, and metabolic factors [ 4 ].
Copper is an essential trace element involved in mitochondrial respiration, antioxidant defense, and neurotransmitter synthesis [ 5 , 6 ]. Copper homeostasis is tightly regulated. Copper excess or deficiency can lead to pathological alterations, and thus negatively affect human health [ 7 ]. Copper dyshomeostasis may contribute to the pathogenesis of atherosclerosis by increasing oxidative stress, inflammation, endothelial dysfunction, and lipid metabolism levels [ 8 – 10 ]. Moreover, a novel type of cell death, which is copper-dependent, has recently been described. This copper-induced cell death, termed cuproptosis, may contribute to atherosclerosis development by causing cell death and impairing mitochondrial function [ 11 , 12 ].
In this review, we explore the role of copper homeostasis and cuproptosis in atherosclerosis. We also discuss potential therapeutic strategies against cuproptosis and provide directions for future research on cuproptosis and atherosclerosis.

Conclusions and future perspectives
In conclusion, the multifaceted role of copper in atherosclerosis involves complex cellular and systemic metabolic processes. Copper deficiency or excess promotes atherosclerosis by inducing oxidative stress, inflammation, endothelial dysfunction, and adverse effects on lipid metabolism. Further research is required to elucidate the interactions between these factors and their role in the development of atherosclerotic lesions.
Copper-induced cell death and the subsequent emergence of the concept of cuproptosis have expanded our understanding of the role of copper in atherosclerosis. The involvement of cuproptosis and the associated mitochondrial dysfunction in atherosclerosis highlights the need for further research into the molecular mechanisms underlying these processes. Therapeutic targets and strategies for mitigating the detrimental effects of copper-induced cell death in atherosclerosis have also been identified, including modulating copper levels, inhibiting cuproptosis, or enhancing cellular defenses against cuproptosis. However, potential side effects should also be considered since an excessive reduction in copper levels may impair essential physiological processes or, in itself, lead to atherosclerosis. Additionally, the absence of cuproptosis-specific biomarkers is a significant barrier that hinders further development of clinical applications regarding cuproptosis. Identifying such biomarkers may facilitate risk stratification and the development of personalized therapeutic approaches.

Abstract

Copper is an essential micronutrient that plays a pivotal role in numerous physiological processes in virtually all cell types. Nevertheless, the dysregulation of copper homeostasis, whether towards excess or deficiency, can lead to pathological alterations, such as atherosclerosis. With the advent of the concept of copper-induced cell death, termed cuproptosis, researchers have increasingly focused on the potential role of copper dyshomeostasis in atherosclerosis. In this review, we provide a broad overview of cellular and systemic copper metabolism. We then summarize the evidence linking copper dyshomeostasis to atherosclerosis and elucidate the potential mechanisms underlying atherosclerosis development in terms of both copper excess and copper deficiency.
Furthermore, we discuss the evidence for and mechanisms of cuproptosis, discuss its interactions with other modes of cell death, and highlight the role of cuproptosis-related mitochondrial dysfunction in atherosclerosis. Finally, we explore the therapeutic strategy of targeting this novel form of cell death, aiming to provide some insights for the management of atherosclerosis.
Facts
- Copper is an essential trace element required in various physiological processes in the human body.
- Dysregulation of copper homeostasis, whether towards excess or deficiency, has been implicated in various health problems, including atherosclerosis.
- Dysregulation of copper homeostasis and copper-induced cell death (cuproptosis) are acknowledged as potential contributors to the pathogenesis of atherosclerosis.
Open questions
- What is the safe window for copper levels to avoid the development of atherosclerosis?
- How can the suitable copper concentration be determined for the treatment of atherosclerosis?
- What are the potential candidate biomarkers that can reliably and sensitively indicate the occurrence of cuproptosis in the context of atherosclerosis?
- What are the potential undiscovered roles of copper in mitochondrial function, and is there interplay between copper and mitochondrial dynamics or mitophagy?
Copper metabolism
Systemic copper homeostasis
Copper is an essential trace element required in various physiological processes in the human body. Copper levels vary across organs and tissues, ranging from 3 mg (kidneys) to 46 mg (bone) [ 13 ]. Copper serves as a cofactor for numerous enzymes involved in energy metabolism, neurotransmitter synthesis, and antioxidant defense [ 5 , 6 , 14 ]. Excessive or deficient copper levels can, however, lead to cytotoxicity and pathological alterations [ 7 ]. Therefore, it is essential to maintain systemic copper levels within a narrow range (Fig. 1).
Copper absorption
Copper is mainly obtained from dietary sources such as organ meat and shellfish [ 15 ]. Copper is primarily absorbed by enterocytes in the small intestine, and its uptake is mediated by copper transporter 1 (CTR1), also known as solute carrier family 31 member 1 (SLC31A1) [ 16 ]. CTR1 appears to play a primary role in this uptake, as indicated by research showing greatly reduced copper accumulation in the peripheral tissues of neonatal mice lacking CTR1 [ 17 ]. The six-transmembrane epithelial antigen of the prostate (STEAP) facilitates this uptake by reducing divalent Cu2+ to monovalent Cu+, the form transported by CTR1.
Copper storage and transportation
After uptake by the intestinal epithelial cells, copper is transported and exported into the bloodstream by the ATPase copper transporter 7A (ATP7A) [ 18 ]. Here, copper is transported through the portal system to the liver bound to soluble chaperones such as human serum albumin (HSA), ceruloplasmin, histidine, and macroglobulin [ 19 , 20 ]. Copper uptake by hepatocytes in the liver is also mediated by CTR1. The liver is crucial in regulating copper metabolism by storing and excreting copper. Copper storage is mediated by the copper-binding protein metallothionein (MT), a reducing molecule rich in thiol groups with a high affinity for copper ions [ 21 ]. MT plays a crucial role in copper homeostasis by storing excess copper and releasing it when required. Copper export is performed by ATPase copper transporter 7B (ATP7B) in hepatocytes, which pumps copper ions from the liver back into the bloodstream. Here, copper ions bind to their soluble partners and are transported to specific tissues and organs [ 22 ].
Copper elimination
Copper is primarily eliminated via biliary excretion [ 7 ]. Excess copper is secreted by the liver into the bile and then excreted through feces [ 23 ]. The ATP7B transporter regulates the elimination of copper from the liver to bile canaliculi [ 24 ]. In cases of ATP7B dysfunction, such as Wilson’s disease, copper accumulates in the liver, leading to liver damage and subsequent health issues [ 25 ].
In summary, copper metabolism is a complex process that involves multiple mechanisms to regulate copper absorption, transport, storage, and elimination.
Cellular copper homeostasis
The maintenance of intracellular copper homeostasis is a complex and tightly regulated process involving the coordinated action of copper transporters, chaperones, and cuproenzymes ( Fig. 2 ) . Copper concentration is maintained within a narrow range through the collaborative action of these copper-dependent proteins.
Copper uptake
In mammalian cells, copper uptake is primarily mediated by CTR1 [ 6 ], a transmembrane protein that forms a trimeric channel to facilitate the passage of Cu ions across the plasma membrane [ 26 ]. CTR1 expression is regulated by copper levels. Low copper levels upregulate CTR1 expression to increase copper uptake, whereas high levels downregulate CTR1 expression to prevent copper cytotoxicity [ 27 , 28 ]. These findings highlight the importance of CTR1 in copper homeostasis.
Intracellular copper distribution
After entering the cell, copper is first delivered to cytoplasmic copper chaperones, which then transport it to intracellular compartments such as the mitochondria, trans-Golgi network (TGN), and nucleus. In mammalian cells, three major copper chaperones have been identified: antioxidant 1 copper chaperone (ATOX1), copper chaperone for superoxide dismutase (CCS), and cytochrome c oxidase copper chaperone 17 (COX17) [ 29 , 30 ].
ATOX1 delivers copper to ATP7A and ATP7B in the TGN [ 31 ]. Additionally, ATOX1 has been shown to function as a new type of copper-dependent transcription factor that mediates copper-induced cell proliferation [ 32 ].
CCS is a soluble cytoplasmic copper-chaperone protein that transfers copper ions to the copper-binding site of superoxide dismutase 1 (SOD1) [ 33 ]. SOD1 is a major antioxidant enzyme that catalyzes the conversion of superoxide radicals into oxygen and hydrogen peroxide, thereby maintaining reactive oxygen species (ROS) homeostasis and protecting the cells from oxidative stress damage [ 34 ]. This function has been shown in SOD1-knockout mice, where the absence of SOD1 increased oxidative stress and led first to liver cell damage and eventually liver cancer [ 35 ].
COX17 is responsible for delivering copper ions for the assembly of cytochrome c oxidase (COX), a key component of the electron transport chain involved in cellular respiration [ 36 ]. Subsequently, COX11 and SCO1, which are also important components of COX assembly, donate copper to the Cu(B) and Cu(A) sites in the COX1 and COX2 core subunits of COX in the mitochondrial inner membrane, respectively. Additionally, COX17 also acts as a copper donor within the mitochondrial intermembrane space (IMS) [ 37 ]. COX17 is thus essential for proper COX assembly, with mutations in COX17 shown to reduce COX activity, resulting in mitochondrial dysfunction and oxidative stress [ 38 ].
Intracellular copper storage and export
Intracellular copper export is mainly mediated by ATP7A and ATP7B, which are copper transporters located in the TGN that regulate copper delivery to secretory pathways and the plasma membrane [ 39 ]. Under normal conditions, ATP7A and ATP7B transport copper ions from the TGN to other cellular compartments for various cellular functions. At excessive intracellular copper levels, ATP7A and ATP7B are activated to export excess copper ions from the TGN and sequester them in copper-binding proteins such as metallothionein [ 40 ]. These proteins also regulate copper homeostasis by reducing uptake and increasing efflux. Mutations in ATP7A and ATP7B can lead to copper metabolism disorders such as Menkes disease and Wilson’s disease [ 41 , 42 ].
Evidence linking copper dyshomeostasis to atherosclerosis
Atherosclerosis is a chronic inflammatory disease characterized by plaque accumulation on the arterial walls, leading to narrowing and hardening of the arteries [ 43 ]. This process can result in serious complications, including coronary artery disease (CAD), stroke, and peripheral artery disease [ 44 – 46 ]. Growing evidence from numerous studies has linked copper dyshomeostasis and the resultant excess or deficiency of copper to atherosclerosis.
Evidence linking copper excess and atherosclerosis
Extensive research has revealed a correlation between elevated copper levels and cardiovascular disease (Table 1). For instance, several prospective cohort studies have shown a significant correlation between elevated serum copper levels and higher mortality rates related to cardiovascular diseases, particularly coronary heart disease [ 47 – 49 ]. Elevated copper levels have also been shown to be an independent risk factor for ischemic heart disease [ 50 ]. In addition to evidence from prospective studies, Stadler et al. directly detected and quantified transition metal ions in human atherosclerotic plaques and found increased copper levels in the diseased intima [ 51 ]. Studies on populations with acute myocardial infarction (AMI) also support these findings, with patients with AMI exhibiting significantly higher serum copper levels than those without AMI [ 52 ]. Moreover, an increase in serum copper levels among post-MI patients was found to have considerable diagnostic value for the occurrence of MI [ 53 ]. Furthermore, altered copper bioavailability is negatively correlated with carotid intima-media thickness (IMT), which may serve as a reliable predictor for early atherosclerosis in patients with obesity [ 54 ]. In addition, serum copper concentrations differ among patients with different carotid atherosclerotic plaque morphologies. Specifically, patients with hemorrhagic plaques have significantly higher serum copper concentrations than those with calcified plaques [ 55 ]. These findings suggest a potential involvement of elevated copper levels in the pathogenesis of atherosclerosis.
Evidence linking copper deficiency and atherosclerosis
Copper deficiency is also recognized as a major contributing factor to the development of atherosclerosis. This is evidenced by the benefits of high dietary copper and copper supplementation. The Institute of Medicine recommends a daily dietary allowance of 900 μg of copper for adults, with a tolerable upper limit of 10,000 μg/day to prevent liver damage [ 56 ]. Although recommendations vary between national authorities, most recommend an intake of 800 to 2400 μg/day [ 15 ]. Substantial research suggests that adequate copper intake reduces the risk of atherosclerosis. For example, Rock et al. found that copper supplementation in middle-aged individuals enhanced the antioxidative capacity of cells, which may help prevent vascular damage and thus reduce the risk of atherosclerosis [ 57 ]. Another cohort study showed an association between adequate dietary intake of copper (equal to or above the estimated average requirement) and a reduced risk of all-cause and cardiovascular disease-related mortality; however, this association was limited to copper intake from food sources [ 58 ]. Studies in animal models corroborate these findings, with Lamb et al. showing that dietary copper deficiency or excess increases susceptibility to aortic atherosclerosis in cholesterol-fed rabbits [ 59 ]. Similarly, copper supplementation was found to reverse pathological changes induced by dietary iron overload in mice, partially normalizing cardiac hypertrophy [ 60 ] and improving cardiac function in pressure overload-induced dilated cardiomyopathy [ 61 ]. Nevertheless, the effects of copper supplementation on the cardiovascular system remain controversial. A study by Diaf et al. of middle-aged women in Algeria found little association between dietary copper intake and atherosclerosis prevalence in diabetes [ 62 ]. These conflicting results may be due to differences in study design and in the dose and duration of copper supplementation.
Potential mechanisms of atherosclerosis development due to copper dyshomeostasis
The mechanisms of atherosclerosis development due to altered copper levels are not fully understood. Several potential mechanisms include oxidative stress, inflammation, endothelial dysfunction, and lipid metabolism.
Oxidative stress
Oxidative stress is a key factor in atherosclerosis development, which typically involves an imbalance in reactive oxygen species (ROS) production and antioxidant defenses [ 63 ]. Since copper is a redox-active metal, changes in copper levels can contribute to the generation of ROS and thereby promote oxidative stress [ 15 , 64 ] ( Fig. 3 ) .
Copper excess and oxidative stress
Elevated levels of free copper ions tend to interact more with hydrogen peroxide through Fenton reactions, leading to the production of highly reactive hydroxyl radicals [ 65 ]. These radicals cause lipid peroxidation, protein oxidation, and DNA damage, ultimately contributing to the initiation and progression of atherosclerosis [ 66 ]. Lipid peroxidation is a chain reaction initiated by an attack of ROS on polyunsaturated fatty acids in cell membrane lipids, which then leads to the oxidative damage of lipid molecules. Increased copper levels can promote lipid peroxidation and result in the formation of oxidized low-density lipoprotein (ox-LDL), which is an atherosclerosis risk factor [ 67 ]. Furthermore, increased copper levels have been shown to reduce the activity of antioxidant enzymes, such as Cu/Zn-SOD, catalase, and glutathione peroxidase, in the red blood cells, whole blood, liver tissue, and brain tissue of rats [ 67 – 69 ]. In rat brain tissue, the decrease in SOD activity and glutathione levels arises because copper overload induces lipid peroxidation [ 68 ]. Copper is also capable of causing DNA strand breaks and the oxidation of bases via oxygen-derived free radicals [ 69 ]. These findings suggest that copper overload impairs the antioxidant defense system in several tissues.
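The copper-catalyzed redox cycling behind this Fenton-type chemistry can be written out explicitly (these are the standard textbook reactions consistent with [ 65 ], not equations reproduced from that paper):

```latex
\begin{align*}
\mathrm{Cu^{2+} + O_2^{\bullet-}} &\rightarrow \mathrm{Cu^{+} + O_2} \\
\mathrm{Cu^{+} + H_2O_2} &\rightarrow \mathrm{Cu^{2+} + OH^{-} + {}^{\bullet}OH} \\
\text{net (Haber--Weiss):}\quad
\mathrm{O_2^{\bullet-} + H_2O_2} &\rightarrow \mathrm{O_2 + OH^{-} + {}^{\bullet}OH}
\end{align*}
```

Because the Cu²⁺/Cu⁺ couple is regenerated on each cycle, even a small pool of redox-active copper can sustain hydroxyl radical production.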
Copper deficiency and oxidative stress
Copper deficiency has also been suggested to contribute to the development and progression of atherosclerosis via oxidative stress. Copper is an essential cofactor of various antioxidant enzymes, including Cu/Zn-SOD, ceruloplasmin, and lysyl oxidase [ 70 ]. Copper deficiency may thus impair the function of these enzymes, leading to an impaired antioxidant defense system and increased susceptibility to oxidative stress [ 70 , 71 ]. This suggestion is supported by the finding that copper deficiency in rats reduces Cu/Zn-SOD activity and increases oxidative damage to various subunits of the erythrocyte spectrin [ 72 ]. Decreased activity of Cu/Zn-SOD, which is encoded by SOD1 , has also been observed in the liver and red blood cells of copper-deficient rats [ 73 ]. The decrease in Cu/Zn-SOD activity caused by copper deficiency leads to a reduction in NO levels, which may promote endothelial dysfunction, reduce vascular relaxation, and increase oxidative stress, thus ultimately contributing to atherosclerosis development [ 70 ]. In addition to affecting antioxidant enzymes, copper deficiency may also reduce COX activity and lead to oxidative inactivation of complex I (NADH: ubiquinone oxidoreductase). This oxidative inactivation may then lead to elevated production of ROS in copper-deficient cells, thereby exacerbating cellular oxidative stress [ 74 ].
Inflammation
Copper may also promote atherosclerosis development by modulating inflammatory responses associated with the disease.
Copper excess and inflammation
Excess copper has been implicated in atherosclerosis pathogenesis due to its ability to induce inflammation. High copper levels were previously reported to stimulate pro-inflammatory cytokine production and thereby promote inflammation within arterial walls [ 75 ]. For example, in primary cardiac cells, Cu 2+ increases the release of interleukin-6 (IL-6) and activates MAP kinases, which are linked to cardiac inflammation and hypertrophy [ 76 ]. Copper-induced oxidative stress also contributes to inflammation because excessive copper generates ROS, which causes oxidative damage to lipids, proteins, and DNA and promotes inflammation [ 15 ]. Furthermore, increases in ROS levels due to increased copper levels can, in turn, lead to the activation of nuclear factor-κB (NF-κB), a crucial protein involved in regulating the expression of proinflammatory genes [ 77 ]. These findings suggest that high copper levels can exacerbate inflammation of the vascular walls and thus promote atherosclerosis development.
Copper deficiency and inflammation
Copper deficiency has also been linked to atherosclerosis development through its impact on inflammation. Copper deficiency may result in decreased expression of adhesion molecules, such as ICAM-1 and VCAM-1, which facilitate leukocyte adhesion onto activated endothelial cells [ 78 ]. Evidence from animal studies also supports the role of copper deficiency in inflammation. For example, neutrophil accumulation increases due to increased ICAM-1 expression in the livers of copper-deficient rats following ischemia/reperfusion [ 79 ]. Further, this elevated ICAM-1 expression has been shown to activate neutrophils and endothelial cells, as evidenced by F-actin polymerization and increased accumulation of neutrophils within the lung microcirculation [ 80 – 82 ]. Pulmonary inflammatory responses are also intensified in copper-deficient animals [ 83 ]. Finally, copper deficiency impairs the activity of Cu/Zn-SOD as previously mentioned, leading to an increased accumulation of ROS and exacerbation of oxidative stress, which can further promote inflammation and atherosclerosis development [ 84 ].
Endothelial dysfunction
Copper dyshomeostasis has been linked to atherosclerosis, with both an excess and deficiency of copper contributing to endothelial dysfunction, a critical early step in atherogenesis.
Copper excess and endothelial dysfunction
Excessive copper can disrupt the balance between the production and degradation of nitric oxide (NO), a key regulator of vascular tone and endothelial function [ 85 ]. Increased copper levels can upregulate inducible NO synthase (iNOS) expression, resulting in excessive production of NO and peroxynitrite, a potent oxidant that causes further oxidative damage [ 86 ]. Moreover, copper interacts with atherosclerosis risk factors such as homocysteine, thereby increasing hydrogen peroxide generation and oxidative stress. Specifically, one study showed that a 4-h incubation with homocysteine and copper caused significant damage to endothelial cells [ 87 ].
Copper deficiency and endothelial dysfunction
Copper deficiency is associated with increased endothelial dysfunction as well. Endothelial dysfunction can result in decreased production of NO, a vasodilator and inhibitor of platelet aggregation, and thereby promote a proatherogenic environment [ 88 ]. Copper deficiency can also lead to decreased NO levels by reducing the levels of SOD1. Reduced NO levels can then inhibit endothelial function and vasodilation, increase oxidative stress, and thereby promote atherosclerosis [ 70 ].
Lipid metabolism
Copper excess and lipid metabolism
Excess copper strongly contributes to atherosclerosis development via its effects on the ox-LDL, which plays a central role in atherosclerosis [ 89 ]. Specifically, copper participates in the oxidation of LDL particles, and alterations in copper levels may affect the susceptibility of LDL particles to oxidation. Excess copper levels can increase the oxidation of LDL particles and trigger the production of ox-LDL, which then contributes to atherosclerosis development [ 67 ].
In addition to its effects on ox-LDL, copper is involved in several forms of lipid metabolism, including fatty acid and cholesterol synthesis and lipoprotein metabolism. Alterations in copper levels may be associated with changes in lipid and lipoprotein concentrations. For example, serum copper and ceruloplasmin levels were found to be positively associated with lipid peroxides, total cholesterol, triglycerides, and apolipoprotein B in healthy individuals [ 90 ]. In another study, the serum copper levels of Iranian patients with angiographically defined CAD were found to be positively correlated with fasting serum triglycerides [ 91 ]. In vivo evidence from animal experiments also supports the role of excess copper in lipid metabolism. A study on yellow catfish demonstrated that copper-induced endoplasmic reticulum stress and disrupted calcium homeostasis alter hepatic lipid metabolism, leading to increased lipid accumulation in the liver [ 92 ].
Copper deficiency and lipid metabolism
Insufficient copper levels may also contribute to atherosclerosis development by affecting lipid metabolism. Since copper is involved in the synthesis of fatty acids and cholesterol, insufficient copper levels may lead to impaired lipid metabolism and, thus, the development of fatty liver disease. Copper deficiency increases total cholesterol levels as well [ 7 ]. Insufficient copper levels can affect the activity of sterol regulatory element-binding proteins 1 and 2 (SREBP-1 and SREBP-2), which are transcription factors involved in fatty acid and cholesterol metabolism [ 93 ]. Specifically, both SREBP-1 isoforms (SREBP-1a and SREBP-1c) are involved in the regulation of fatty acid synthesis, whereas SREBP-2 is mainly involved in the regulation of cholesterol biosynthesis [ 94 ]. Low copper diets were found to induce accumulation of the mature form of SREBP-1 in the nuclei of rat liver cells, yet no change was observed in the DNA-binding site of SREBP-1 [ 95 ]. Hence, copper may play a role in regulating the subcellular localization of SREBP-1, potentially affecting its activity and, in turn, lipid metabolism. Further research is needed to fully understand the mechanisms through which copper affects the activity of SREBP-1 and other transcription factors involved in lipid metabolism.
Other risk factors
Copper levels are also associated with other atherosclerosis risk factors, such as high blood pressure. Copper inhibits the activity of angiotensin-converting enzymes, such that copper deficiency leads to increased angiotensin levels and, thus, water and sodium retention and hypertension [ 96 ]. Moreover, copper is a cofactor of SOD, which is a major player in the antioxidant defense system. Hence, copper deficiency may increase the levels of superoxide free radicals, leading to elevated angiotensin levels and consequent hypertension [ 97 ].
Copper-induced cell death and atherosclerosis
From copper-induced cell death to cuproptosis
The discovery of copper-induced cell death dates back to the early 1980s [ 98 ]. Increased copper levels were found to promote ROS generation and thereby lead to oxidative stress, DNA damage, and, ultimately, cell death [ 99 ]. These findings have led to further investigations into the molecular mechanisms underlying copper-induced cell death and their potential implications for human diseases. Indeed, conflicting findings suggest that, in addition to ROS accumulation, excess copper may induce cell death through apoptosis or caspase-independent cell death [ 100 , 101 ]. Overall, however, the mechanisms responsible for copper-induced cell death remained poorly understood.
However, in March 2022, Tsvetkov et al. published a groundbreaking study unveiling the mechanism of copper-induced cell death, which was termed cuproptosis [ 12 ]. Cuproptosis represents a unique form of cell death triggered by an excess of copper ions. In their study, Tsvetkov et al. showed that treatment with the copper ionophore elesclomol induced cell death. Remarkably, only a copper chelator could rescue cells from elesclomol-induced cell death; rescue was not possible with any of the inhibitors of apoptosis, necroptosis, oxidative stress, ROS-induced cell death, or ferroptosis. These findings unequivocally establish that cuproptosis is distinct from other known cell death modalities, underscoring its unique mechanisms and signaling pathways. Notably, cuproptosis is regulated by mitochondrial respiration, as supported by research indicating that mitochondria-dependent cells exhibit a sensitivity to copper ionophores nearly 1,000 times higher than cells undergoing glycolysis [ 12 ]. The importance of mitochondrial respiration in cuproptosis has been highlighted in further research, revealing a close correlation between cuproptosis and the tricarboxylic acid (TCA) cycle. During cuproptosis, intracellular copper binds to the lipoylated components of the TCA cycle. This leads to the aggregation of copper-bound lipoylated mitochondrial proteins, which can disrupt the TCA cycle and, therefore, interfere with cellular energy production. The upstream regulatory factors FDX1 and LIAS are critical in this process. Aggregation of these proteins and the subsequent reduction of Fe–S clusters, which are essential cofactors for various cellular processes, including electron transport and enzymatic reactions [ 102 ], promote proteotoxic stress and ultimately lead to cell death ( Fig. 4 ) .
Potential interactions between copper-induced cell death and other cell death pathways
Accumulating evidence suggests widespread cross-talk and interactions among the primary initiators, effectors, and executioners involved in pyroptosis, necroptosis, ferroptosis, and cuproptosis.
Pyroptosis
Pyroptosis is a pro-inflammatory programmed cell death that is primarily driven by inflammasome assembly, accompanied by GSDMD cleavage and IL-1β and IL-18 release [ 103 – 105 ]. NLRP1, NLRP3, NLRC4, AIM2, and pyrin are well-established inflammasome sensors that assemble canonical inflammasomes in a process induced by inflammatory stimuli such as those resulting from microbial infections [ 106 , 107 ]. Copper ions, which are essential micronutrients for many physiological processes, have been shown to trigger ROS production and activate the NF-κB signaling pathway. This leads to the upregulation of pro-inflammatory genes and cytokines, potentially influencing the progression of atherosclerosis [ 108 – 110 ]. Crosstalk between copper ions and pyroptosis is therefore plausible.
Evidence from animal studies demonstrates that copper loading in rat hepatocytes leads to a significant increase in the mRNA expression of pyroptosis-related genes (caspase-1, IL-18, IL-1β, and NLRP3) and the protein expression of caspase-1 [ 111 ]. Similarly, copper exposure in mouse microglial cells triggers an inflammatory response, resulting in the upregulation of NLRP3/caspase-1/GSDMD axis proteins and subsequent pyroptosis. These effects are likely mediated by the early activation of the ROS/NF-κB pathway and subsequent disruption of mitophagy [ 77 ]. Moreover, comparable outcomes were observed in murine macrophages treated with copper oxide nanoparticles. Copper oxide nanoparticle exposure induces oxidative stress and activates NLRP3 inflammasomes, leading to the expression of pro-IL-1β through the MyD88-dependent TLR4 signaling pathway, followed by NF-κB activation in murine macrophages [ 108 ].
In summary, these findings demonstrate the presence of crosstalk between copper-induced cell death and pyroptosis. Copper exposure influences gene and protein expression associated with pyroptosis in various cell types, with the underlying mechanisms being ROS/NF-κB pathway activation and the inflammatory response. Further research is needed to elucidate the precise underlying mechanisms and explore the implications of this interaction.
Necroptosis
Necroptosis is a form of programmed necrosis linked to atherosclerosis via its potential involvement in plaque destabilization and rupture [ 112 , 113 ]. Necroptosis was recently shown to activate the NLRP3 inflammasome by causing potassium efflux through MLKL pores in macrophages [ 114 ]. Bioinformatics analysis further indicated an association between ZBP1 and cuproptosis as well as necroptosis [ 115 ]. ZBP1 activation led to the recruitment of RIPK3 and caspase-8 to activate the NLRP3 inflammasome, which in turn triggers necroptosis and pyroptosis [ 116 , 117 ].
Ferroptosis
Ferroptosis, a form of iron-dependent cell death triggered by lipid peroxidation and accumulation of lipid-based reactive oxygen species [ 118 – 120 ], appears to be influenced by copper levels due to the redox-active properties of copper ions [ 121 , 122 ]. Cuproptosis can modulate the expression of key genes involved in ferroptosis regulation, such as glutathione peroxidase 4 (GPX4), which eliminates phospholipid hydroperoxides [ 123 , 124 ], and acyl-coenzyme A synthetase long-chain family member 4 (ACSL4) [ 125 ], thereby regulating the sensitivity of cells to ferroptosis inducers. Notably, copper chelators can reduce sensitivity to ferroptosis specifically while leaving other forms of cell death unaffected. In a study by Xue et al., copper was found to play a novel role in promoting ferroptosis through the degradation of GPX4 via macroautophagy/autophagy [ 126 ]. Furthermore, research by Gao et al. reveals an interaction between copper-induced cell death and ferroptosis. Their findings indicate that elesclomol administration to colorectal cancer cells increased Cu(II) levels in the mitochondria, downregulated ATP7A expression, and increased ROS accumulation. This process triggered SLC7A11 degradation, intensifying oxidative stress and resulting in ferroptosis [ 127 ]. Additionally, copper depletion greatly enhanced ferroptosis through mitochondrial perturbation and the disruption of antioxidant mechanisms [ 128 ]. Specifically, copper depletion limits GPX4 protein expression and increases cellular sensitivity to ferroptosis inducers, establishing a direct link between copper levels and ferroptosis [ 128 ].
Based on the aforementioned studies, it is evident that copper is closely linked to necroptosis, pyroptosis, and ferroptosis, suggesting a significant cross-talk between these different cell death pathways. Understanding the underlying mechanisms that connect these modes of cell death is of paramount importance for the development of novel atherosclerotic therapeutic strategies that can target multiple pathways simultaneously.
Cuproptosis-related mitochondrial dysfunction and atherosclerosis
Mitochondria are vulnerable to copper-induced damage, which causes oxidative damage to their membranes [ 129 , 130 ]. Excessive intracellular copper may also disrupt mitochondrial function by altering the activity of several key enzymes, such as those involved in the TCA cycle and oxidative phosphorylation [ 131 ]. Severe mitochondrial dysfunction and decreases in the activities of several liver enzymes, including complex I, complex II, complex III, complex IV, and aconitase, were observed in patients with copper overload. These effects are potentially mediated by the accumulation of copper in the mitochondria [ 132 ]. Furthermore, copper treatment induces selective changes in metabolic enzymes through lipoylation, a highly conserved lysine post-translational modification. Protein lipoylation occurs only on dihydrolipoamide S-acetyltransferase (DLAT), dihydrolipoamide S-succinyltransferase (DLST), dihydrolipoamide branched-chain transacylase E2 (DBT), and glycine cleavage system protein H (GCSH), all of which are involved in metabolic complexes that regulate carbon entry points to the TCA cycle [ 133 , 134 ]. Additionally, copper overload may trigger the opening of the mitochondrial permeability transition pore and cause the release of pro-apoptotic factors, ultimately resulting in cell death [ 135 ].
Mitochondrial dysfunction is implicated in atherosclerosis development and progression [ 136 ]. Impaired mitochondrial function promotes lipid accumulation, oxidative stress, inflammatory responses, and proliferation of vascular smooth muscle cells, all of which contribute to plaque formation and destabilization [ 137 , 138 ]. Considering its impact on mitochondrial function, cuproptosis may contribute to the progression of atherosclerosis by aggravating mitochondrial dysfunction.
Potential therapeutic strategies targeting cuproptosis in atherosclerosis
Copper chelators
Copper chelators bind and sequester copper ions. Several copper chelators have shown promise in animal studies and clinical trials for atherosclerosis treatment.
Tetrathiomolybdate (TTM) has been shown to be an effective copper chelator with potential therapeutic implications in attenuating atherosclerosis progression, as demonstrated in animal models [ 139 , 140 ]. In a study using ApoE -/- mice, TTM treatment for 10 weeks significantly reduced aortic lesion development, indicating its potential as an anti-atherosclerotic agent [ 139 ]. The beneficial effects of TTM were attributed to its ability to reduce bioavailable copper levels and inhibit vascular inflammation [ 140 ]. Copper is involved in vascular inflammation as an etiologic factor of atherosclerotic vascular disease, and by chelating copper, TTM may modulate copper-related pathways involved in atherosclerosis and inflammation. Wei et al. demonstrated that TTM copper chelation inhibited lipopolysaccharide (LPS)-induced inflammatory responses in the mouse aorta and other tissues. This inhibition may occur through suppression of the redox-sensitive transcription factors NF-κB and AP-1, which play crucial roles in inflammation [ 140 ]. By targeting copper-related pathways and modulating inflammatory processes, TTM exhibits potential as an anti-atherosclerotic agent.
Ethylenediaminetetraacetic acid disodium salt (EDTA) is a broad-spectrum metal-chelating agent that has also been investigated for its potential to treat atherosclerosis [ 141 ]. For instance, a double-blind placebo-controlled trial ( N = 1708) showed that chelation therapy with disodium EDTA reduced the risk of adverse cardiovascular outcomes in stable patients with a history of myocardial infarction (MI). The primary endpoint occurred less frequently in the chelation group than in the placebo group (26% vs. 30%) [ 142 ]. These findings offer preliminary evidence to direct further studies but do not in themselves provide sufficient evidence to justify the routine use of chelation therapy in patients with MI. Moreover, a meta-analysis of five studies covering a total of 1,993 randomized participants failed to identify sufficient evidence to determine the effectiveness of chelation therapy in the treatment of atherosclerotic cardiovascular disease [ 143 ]. The contradictory results among these studies may be attributed to variations in research design and/or characteristics of the study populations. Therefore, further research is needed before routine use of chelation therapy can be recommended.
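For scale, the endpoint rates reported above (26% chelation vs. 30% placebo) translate into the standard effect measures below; this is back-of-the-envelope arithmetic for illustration, not an analysis taken from the trial report.

```python
# Effect-size arithmetic from the reported primary endpoint rates in the
# disodium EDTA chelation trial (26% vs. 30% [142]); illustrative only.
p_chelation = 0.26   # primary endpoint rate, chelation arm
p_placebo = 0.30     # primary endpoint rate, placebo arm

arr = p_placebo - p_chelation  # absolute risk reduction (~4 percentage points)
rrr = arr / p_placebo          # relative risk reduction (~13%)
nnt = 1 / arr                  # number needed to treat (~25 patients)

print(f"ARR = {arr:.2f}, RRR = {rrr:.1%}, NNT ≈ {nnt:.0f}")
```

A roughly 13% relative reduction with an NNT near 25 explains why the trial is read as hypothesis-generating rather than practice-changing.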
Regulation of copper chaperone protein expression
Copper chaperones play a crucial role in maintaining cellular copper homeostasis by facilitating the transfer of copper ions to target proteins and organelles. Modulating the expression of copper chaperone proteins potentially offers an alternative approach to regulate copper levels and limit the contribution of copper-induced cell death to atherosclerosis.
ATOX1 is a copper chaperone protein that is of major importance in mammalian cells. ATOX1 has been shown to translocate to the nucleus in response to inflammatory cytokines or exogenous copper. Furthermore, ATOX1 is localized in the nucleus of endothelial cells in the inflamed atherosclerotic aorta [ 144 ]. The migration of VSMCs is crucial for neointimal formation following vascular injury and atherosclerotic lesion formation. ATOX1 was found to promote VSMC migration and inflammatory cell recruitment to injured vessels [ 145 ]. Furthermore, copper-dependent binding of ATOX1 to TRAF4 is required to facilitate nuclear translocation of ATOX1 and ROS-dependent inflammatory responses in TNF-α-stimulated endothelial cells (ECs) [ 146 ]. This highlights the potential of targeting the ATOX1-TRAF4 axis as a novel therapeutic strategy for the treatment of atherosclerosis. In summary, ATOX1 represents a promising therapeutic target for inflammation-related vascular diseases such as atherosclerosis.
Copper ionophores
Copper ionophores transport copper into cells, leading to an increase in intracellular copper levels and subsequent cell death. However, copper chelators can inhibit this process. Several drugs can act as copper ionophores, including disulfiram, pyrithione, chloroquine, and elesclomol [ 147 ]. Among these, elesclomol has received the most attention and has been subjected to several clinical trials for use in cancer treatment. Although the majority of these trials have not shown promising results regarding further development of elesclomol as a drug, they have verified its safety [ 148 ]. Nanomedicines combining copper ions with copper ionophores are also currently being widely investigated. Researchers recently designed multifunctional nanoparticles with pH-responsive and CD44-targeted properties. The utilization of dendritic mesoporous silica nanoparticles capped with copper sulfide and coated with hyaluronic acid enables precise drug delivery and controlled release in the acidic microenvironment of atherosclerotic inflammation [ 149 ]. These findings highlight the potential of copper-based nanomedicines in developing innovative approaches for targeted atherosclerosis therapy. Notably, an important aspect to consider in the advancement of copper ionophores for clinical therapeutic applications is the notable impact that slight structural changes can exert on their properties and functions [ 16 ]. In addition, copper ionophore-mediated cell death is strongly correlated with mitochondrial metabolism and closely associated with atherosclerosis development. Based on these findings, copper ionophores may represent a novel therapeutic strategy for targeting copper-induced cell death in atherosclerosis.

Author contributions
MW and LL provided the guidance for this study. SY prepared the original manuscript and the illustrations. YL, LZ, and XW assisted with manuscript review and revision. All the authors have read and approved the final version of the manuscript.
Funding
This study was supported by the National Natural Science Foundation of China (Nos. 81202805, 82074254, 82374281), the Beijing Natural Science Foundation (No. 7172185), and the Science and Technology Innovation Project of China Academy of Chinese Medical Science (C12021A01413).
Competing interests
The authors declare no competing interests.

Citation: Cell Death Discov. 2024 Jan 13; 10:25 (PMC10787750). License: CC BY.
PMC10787751 (PMID: 38218988)

Introduction
Fish skin has been used as a biological dressing in the management of burns 1 – 9 and wounds 10 . It has also been used as a biological graft in neovaginoplasty in cases of vaginal agenesis 11 , 12 . The histological characteristics of fish skin and human skin are similar 13 . The fish skin contains a high amount of collagen, making it a suitable biomaterial for tissue engineering 9 , 13 , 14 .
The perishability of fish skin still poses major preservation difficulties 15 . It must be kept chilled or frozen, and even then, it has a very short shelf life 16 . Several damage mechanisms, including microbiological spoilage, autolytic degradation, and lipid oxidation, are responsible for the deterioration of fresh fish skin during storage 17 . Therefore, most clinical practices have used fish skin grafts in fresh form for wound and burn management 1 , 3 , 5 – 10 .
Lyophilization, also known as freeze drying or cryodesiccation, is a low-temperature dehydration process that involves freezing the product, lowering the pressure, and then removing the ice by sublimation. This is in contrast to most conventional dehydration methods, which use heat to evaporate water 18 .
Lyophilization can be a potential method for the long-term storage of fish skin grafts. In a previous study, lyophilization was addressed as a method for preserving fish skin grafts 2 . However, applying several detergents and sterilizing agents (e.g., chlorhexidine) to fish skin before lyophilization raises concerns about the integrity of collagen and the targeted material in the fish skin 2 , 9 , 10 , 13 . Moreover, the histopathology and microbiology of these lyophilized fish skin grafts were not evaluated before being used clinically.
The microbial colonization of the wound is an essential factor that affects the healing process of wounds 19 . Different sterilization methods have been described for biological dressings. These include physical methods such as irradiation and chemical techniques including treatment with chlorhexidine, povidone iodine, ethylene oxide gas, silver nanoparticles, and ozone 10 , 13 , 20 . Gamma irradiation represents an effective sterilization method, as it has direct and indirect effects on microbial DNA 21 . Different doses of gamma irradiation (25, 30, and 50 kGy) have been used to sterilize fish skin without reference to their impact on collagen content 1 , 2 , 13 or antimicrobial efficiency 2 , 9 .
The current study’s objectives are (1) to describe an optimized method of Tilapia skin lyophilization and (2) to define a standard dose of gamma irradiation for sterilization of lyophilized Tilapia skin, with special consideration of collagen integrity and the microbial count of the fish skin.

Materials and methods
Ethics statement
The Research Ethics Committee (REC) of the Faculty of Veterinary Medicine, Assiut University, Assiut, Egypt, has approved all the procedures in this study in accordance with the Egyptian bylaws and OIE animal welfare standards for animal care and use in research and education. All methods were performed in accordance with relevant guidelines and regulations.
Fish skin sampling
The fish skin was collected from fresh Nile tilapia (Oreochromis niloticus) (weight: 620 ± 35 g; standard length: 20 ± 3 cm) obtained from the Aquatic Medicine Unit, Faculty of Veterinary Medicine, Assiut University, Assiut, Egypt. Fish were euthanized physically by decapitation. After removal of the fish scales, the skin was dissected from the underlying tissue, cut into strips (6 × 2 cm), and then washed in sterile normal saline. Lyophilization was performed on the skin strips.
Fish skin lyophilization
Lyophilization was carried out in a freeze dryer (LyoQuest, Telstar, Spain; serial no. 1812). Fresh fish skin strips were placed in the flasks of the freeze-dryer apparatus at − 80 °C for 24 h. Skin strips were then subjected to a cold vacuum for 5–6 h. Lyophilized skin strips were then vacuum-packed (Fig. 1 A–C).
Gamma (γ) irradiation sterilization of lyophilized fish skin
Gamma irradiation was performed at a gamma station ( 60 Co gamma cell (2000 Ci), 30 ± 5 °C, 1.5 Gy/s, 150 rad/s) at the National Center for Radiation Research and Technology (NCRRT) of the Egyptian Atomic Energy Authority, Cairo, Egypt. The vacuum-packed lyophilized Tilapia skin was subjected to Cobalt-60 γ sterilization at 5, 10, and 25 kGy 2 . Sterilized skin strips were subjected to microbiological and histopathological evaluations at 15 and 30 days post-sterilization (3 skin strips per treatment).
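At the stated dose rate of 1.5 Gy/s, the target doses imply the approximate exposure times computed below; this is illustrative arithmetic, not the facility's dosimetry protocol.

```python
# Approximate exposure times implied by the stated dose rate (1.5 Gy/s)
# for the target doses of 5, 10, and 25 kGy; illustrative calculation only.
DOSE_RATE_GY_PER_S = 1.5

def exposure_time_s(target_dose_kgy: float) -> float:
    """Seconds of irradiation needed to reach a target dose given in kGy."""
    return target_dose_kgy * 1000 / DOSE_RATE_GY_PER_S

for dose in (5, 10, 25):
    t = exposure_time_s(dose)
    print(f"{dose} kGy -> {t:.0f} s ({t / 3600:.2f} h)")
```

The 25 kGy dose, for example, corresponds to roughly 4.6 h of continuous exposure at this rate.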
Microbiological evaluation
Following Morton et al. 22 , the dilution plate method was employed to count the microorganisms on the lyophilized fish skin under different doses of gamma irradiation (0, 5, 10, and 25 kGy) after 15 and 30 days of irradiation. Fish skin strips were swabbed with a sterilized swab needle, and the swab was suspended in 10 mL of sterile saline solution. For microbiological counting and identification, various media types were employed: nutrient agar and MacConkey agar media to count all aerobic bacteria, yeast malt extract medium (YME) to count the aerobic yeasts, and potato dextrose agar medium (PDA) to count the fungi 23 – 25 . One milliliter (1 mL) of the saline swab suspension was added to sterilized Petri dishes before the dishes were filled with sterilized isolation medium and left to solidify; three replicates were performed for each medium. Plates were incubated for 24 h at 35 °C for bacteria, 72 h at 28 °C for yeasts, and 7 days at 28 °C for fungi. The developed colony count was estimated as CFU/cm 2 . Using the same media, developed colonies with variations in their morphological characteristics, such as size, color, colony edge, and pigmentation, were sub-cultured and purified for identification using Bergey’s Manual of Systematic Bacteriology 26 , Yeasts: characteristics and identification 24 , and standard keys for fungal identification 27 .
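The CFU/cm² estimate in the counting step can be written out explicitly. The dilution and area factors below follow the described setup (1 mL plated from a 10 mL swab suspension of a 6 × 2 cm strip) but should be read as assumptions for illustration, not the authors' exact formula.

```python
# Illustrative CFU/cm^2 estimate from replicate plate counts, assuming
# 1 mL plated from a 10 mL swab suspension of a 6 x 2 cm skin strip.
def cfu_per_cm2(colony_counts, suspension_ml=10.0, plated_ml=1.0,
                strip_area_cm2=6.0 * 2.0):
    """Average CFU/cm^2 over replicate plates."""
    mean_count = sum(colony_counts) / len(colony_counts)
    total_cfu = mean_count * suspension_ml / plated_ml
    return total_cfu / strip_area_cm2

# three replicate plates with 24, 30, and 27 colonies (made-up numbers)
print(cfu_per_cm2([24, 30, 27]))  # → 22.5
```

Averaging the three replicate plates before scaling keeps the estimate robust to plate-to-plate variation.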
Histological evaluation
Fish skin strips (0.5 × 0.5 cm) were fixed in 10% neutral buffered formalin, routinely processed, and subsequently embedded in paraffin. Afterwards, they were sectioned into 5 μm thick sections and stained with Mayer's hematoxylin (Merck, Darmstadt, Germany) and eosin (Sigma, Missouri, USA). The slides were examined microscopically, and histological evaluations were performed blindly on coded samples, with comparison to the control group. Collagen fiber integrity and organization were assessed histologically on a 0–3 scale 10 , 28 , 29 . Collagen fiber integrity scores: 0 = continuous, long fibers; 1 = slightly fragmented; 2 = moderately fragmented; 3 = severely fragmented. Collagen fiber organization scores: 0 = compact and parallel; 1 = slightly loose and wavy; 2 = moderately loose, wavy, and crossing each other; 3 = no identifiable pattern.
Histochemical evaluation
The collagen content evaluation was carried out with Gomori's trichrome stain 30 . After deparaffinization in xylene, the paraffin-embedded sections were rehydrated through a graded series of ethanol solutions (into 0.1 M phosphate-buffered saline (PBS), pH 7.2) to distilled water. Sections were stained with Gomori's trichrome according to the manufacturer's protocol, dehydrated in graded alcohol, cleared with xylene, and mounted. Slides were then examined microscopically to verify the green collagen staining. The collagen content was evaluated based on the depth of the green staining.
The percentage of collagen-positive area was calculated with ImageJ (1.48v) using threshold area fraction determination. The amount of collagen was expressed as a percentage of the total number of pixels in the optical field and reported as mean ± SEM.
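The area-fraction measurement can be sketched in plain Python as a simplified analogue of ImageJ's thresholding step; the threshold value and image below are illustrative, not taken from the study:

```python
import numpy as np

def collagen_area_percent(intensity, threshold=128):
    # Fraction of pixels whose staining intensity meets or exceeds the
    # threshold, analogous to ImageJ's "threshold area fraction" measurement.
    mask = np.asarray(intensity) >= threshold
    return 100.0 * mask.sum() / mask.size
```

The same value averaged over replicate fields would then be reported as mean ± SEM.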
Statistical analysis
One-way ANOVA followed by Tukey's post hoc test was performed using GraphPad Prism software version 8.0.1 (GraphPad Software Inc., La Jolla, CA, USA). P values < 0.05 were considered statistically significant. | Results
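The comparison can be illustrated with a minimal NumPy sketch of the one-way ANOVA F statistic (the study itself used GraphPad Prism, and the replicate values in the test are illustrative, not the paper's raw data):

```python
import numpy as np

def one_way_anova_F(*groups):
    # F = (between-group mean square) / (within-group mean square)
    all_x = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand = all_x.mean()
    k, N = len(groups), len(all_x)
    ssb = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ssw = sum(((np.asarray(g, dtype=float) - np.mean(g)) ** 2).sum() for g in groups)
    return (ssb / (k - 1)) / (ssw / (N - k))
```

A large F (compared against the F distribution with k − 1 and N − k degrees of freedom) indicates that at least one group mean differs; Tukey's post hoc test then locates the differing pairs.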
Microbiological evaluation of the lyophilized fish skin after gamma irradiation
Gamma irradiation exhibited an efficient sterilizing effect on the fish skin surface microbiota. Different doses of gamma irradiation (5, 10, and 25 kGy) were applied to the lyophilized fish skin, and the microbial counts of aerobic bacteria, aerobic yeasts, and fungi were determined 15 and 30 days after irradiation, as shown in Fig. 2 A–C. It was clear that gamma irradiation is an effective microbial sterilizer, especially at the high dose of 25 kGy, which significantly inhibited the aerobic bacterial counts by 98.53% and 98.96%, the aerobic yeast counts by 99.2% and 99.8%, and the fungal counts by 98.48% and 99.25% at 15 and 30 days after irradiation, respectively ( P < 0.05). Gamma irradiation also maintained the low microbial skin counts for a longer period (30 days), and remarkably, even at the low dose of 5 kGy, gamma irradiation was effective against aerobic bacteria.
By increasing the gamma irradiation dose, the total counts of aerobic bacteria decreased dramatically, giving 92.67 ± 2.62 (88.36% inhibition), 39.33 ± 4.92 (95.06% inhibition), and 11.67 ± 1.7 (98.53% inhibition) at 5, 10, and 25 kGy 15 days post-irradiation, respectively ( P < 0.05). At 30 days post-irradiation, the bacterial counts were 74.33 ± 2.49 (88.93% inhibition), 32 ± 5.7 (95.24% inhibition), and 7 ± 1.6 (98.96% inhibition) at 5, 10, and 25 kGy, respectively, compared to the untreated sample (796 ± 9.93 and 671.7 ± 7.4 at 15 and 30 days, respectively) ( P < 0.05).
For yeasts, it was clear that by increasing the gamma irradiation dose, the total counts of yeasts decreased significantly 15 and 30 days after irradiation, giving 208.67 ± 6.13 (55.25% inhibition), 19 ± 1.63 (95.93% inhibition), and 3.67 ± 0.47 (99.2% inhibition) at 5, 10, and 25 kGy, respectively, at 15 days ( P < 0.05). At 30 days, the yeast counts were 104.67 ± 4.9 (76.21% inhibition), 8.7 ± 1.2 (98.03% inhibition), and 1 ± 0 (99.77% inhibition) at 5, 10, and 25 kGy, respectively, compared to the untreated sample (466.3 ± 2.87 and 440 ± 5.72 at 15 and 30 days, respectively) ( P < 0.05).
Filamentous fungi followed the same pattern as bacteria and yeasts: the total counts of fungi decreased significantly ( P < 0.05) with increasing gamma irradiation dose, giving 95.7 ± 4.9 (45.54% inhibition), 34.7 ± 3.7 (80.27% inhibition), and 2.67 ± 0.4 (98.48% inhibition) at 5, 10, and 25 kGy, respectively, after 15 days. After 30 days, the fungal counts were 55.67 ± 4.9 (68.49% inhibition), 15.7 ± 3.3 (91.13% inhibition), and 1.3 ± 0.1 (99.24% inhibition) at 5, 10, and 25 kGy, respectively, compared to the untreated sample (175.7 ± 3.7 and 176.67 ± 3.8 at 15 and 30 days, respectively).
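The inhibition percentages above follow directly from the treated and untreated counts; a small helper reproduces them (the test values are taken from the text):

```python
def percent_inhibition(treated, control):
    # Inhibition (%) = (1 - treated/control) * 100
    return (1.0 - treated / control) * 100.0
```

For example, a bacterial count of 92.67 against the untreated 796 gives 88.36% inhibition, matching the reported value.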
By investigating the microbial species present on the fish skin, Bacillus sp., Escherichia coli, Micrococcus luteus, and Serratia marcescens were the dominant aerobic bacteria; Candida sp., Saccharomyces sp., and Rhodotorula sp. were the dominant aerobic yeasts; whereas Aspergillus niger, A. flavus, A. fumigatus, and Rhizopus stolonifer were the dominant aerobic fungi.
Impact of lyophilization on fish skin
In the control group, fresh fish skin was examined histologically; the collagen fibers were tightly packed, well-organized, and parallel-distributed, with no evidence of disaggregation (Fig. 3 A,B). In addition, the lyophilized fish skin rehydrated in normal saline for 15 min showed preservation of the collagen fibers in a well-organized pattern with no signs of disaggregation (Fig. 3 C,D).
Histological evaluation of the lyophilized fish skin after gamma irradiation
In the lyophilized fish skin subjected to gamma irradiation at 5, 10, and 25 kGy, histological examination 15 days post-sterilization revealed slightly to moderately disorganized and disaggregated collagen fibers in skin irradiated at 5 and 25 kGy (Figs. 4 A,E, 6 A,B) ( P < 0.05). However, the collagen fibers were well organized in a parallel pattern at 10 kGy (Fig. 4 C). At 30 days post-sterilization, the collagen fiber structure did not change in skin irradiated at 5 and 10 kGy (Figs. 5 A,C, 6 A,B) ( P < 0.05), while disorganization and disaggregation were more prominent at 25 kGy (Figs. 5 E, 6 A,B) ( P < 0.05).
Histochemical evaluation of the fish skin
The collagen fibers were also stained with Gomori's trichrome stain, and the collagen intensity in the skin was measured using ImageJ (1.48v). At 15 and 30 days post-sterilization, the collagen deposition and intensity in skin samples irradiated at 5 and 10 kGy did not show significant differences compared with the control (Figs. 4 B,D, 5 B,D, 6 C). However, collagen deposition and intensity were significantly decreased in skin samples exposed to 25 kGy after 15 and 30 days of irradiation (Figs. 4 F, 5 F, 6 C) ( P < 0.0001).
Recently, biological dressings have become indispensable in modern strategies of burn and wound management. The current study investigated the use of lyophilization as a method for the long-term preservation of fish skin grafts in a manner that maintains the integrity of fish skin collagen. Moreover, the study revealed that gamma irradiation at a dose of 10 kGy is optimal for sterilizing lyophilized fish skin grafts without disrupting the collagen content.
Autologous skin grafts have limited availability and cause additional scarring 31 . Allografts have many limitations, such as finding suitable donors and the risk of infection transmission, especially viral infections 20 . Therefore, various xenografts have been described as biological wound dressings in humans, including bovine embryo skin 32 , bovine amnion 33 , canine skin 34 , frog skin 35 , and porcine skin 36 – 38 . However, there are concerns about their clinical application due to immunological causes, zoonotic risk, and religious beliefs 20 , 39 , 40 .
Fish skin was found to be consistent with human skin 8 , 13 and has a higher collagen content than other skins 9 , 13 , 14 , which makes it a potential wound healing promoter 41 . Moreover, fish skin has advantages such as low antigenicity, strong adherence to the skin, a lower risk of transmitting diseases, and a degree of moisture resembling that of human skin 8 , 42 . The fish skin graft is highly porous, and since the pore size is within the range of typical cell sizes, fish skin is well suited to support cell ingrowth 43 . Therefore, fish skin has recently been suggested as a potential xenograft 1 – 7 , 9 – 12 .
Although tilapia skin contains a normal non-infectious microbiota, it may be exposed to microbial contaminants from a contaminated water environment 7 , 44 . Bacteria, yeasts, fungi, and viruses represent serious fish contaminants 45 . Enterobacteriaceae, yeasts, and aerobic spoilers are the main spoilage microorganisms of fish during storage 46 . Due to the high water activity and low acidity of fish skin, bacteria proliferate quickly, causing spoilage 47 . Therefore, in most studies, tilapia skin was used as a biological dressing graft for burn or wound treatment shortly after its processing 1 , 3 – 7 , 9 , 10 . This creates an urgent need for a method for long-term preservation and storage of fish skin grafts.
Various methods have been used for the preservation of different biological dressings, such as cryopreservation 48 , 49 and glycerolization 4 , 5 , 7 , 9 , 50 – 52 . Although these methods could prolong the shelf life of biological dressings, they may also affect their effectiveness 53 .
Lyophilization, or freeze-drying, is a specific dehydration process that improves the compositional stability of the final product and reduces the water content to low levels that allow cells to survive long storage 18 , 54 . Microbial cells need strong stabilization processes to survive on the lyophilized parts; otherwise, they will be damaged and/or die 55 .
Lyophilization has been addressed in previous studies for fish skin preservation 2 , 3 . However, the use of a rigorous chemical sterilization process before lyophilization raises concerns about collagen integrity and the healing potential of other biological components 43 , especially since these studies were not followed by histological evaluation before clinical application 1 , 2 , 7 . Moreover, using several sterilizing steps with several chemicals makes such procedures costly and impractical for industrial application. Chlorhexidine is commonly used for sterilization and disinfection in grafting procedures. However, many bacterial spores and mycobacteria are chlorhexidine-resistant 56 , 57 , and it has low activity against viruses. A further limitation of chlorhexidine is its cytotoxicity and the alteration it causes in the biochemical properties of collagen 10 , 58 .
Here, this study describes the lyophilization of fish skin without using any disinfectants during the processing of fish skin grafts, to ensure collagen safety and, subsequently, graft effectiveness. This was confirmed by histological and histochemical evaluations of the lyophilized fish skin after rehydration in normal saline for 15 min, which showed no change in the arrangement and content of the collagen fibers of the fish skin.
Sterilization of fish skin with gamma irradiation represents an efficient method against the fish skin microbiota. High-energy gamma rays are released by several radioisotopes, including the comparatively cheap byproducts of nuclear fission, Caesium-137 (137Cs) and Cobalt-60 (60Co). Radioactive Co-60 is created by subjecting the plentiful, non-radioactive Co-59 isotope to neutron irradiation inside a nuclear reactor. Co-60 atoms then decay into nonradioactive Ni-60 atoms, emitting one electron and two gamma rays with energies of 1.17 MeV and 1.33 MeV. The gamma rays are released isotropically and do not have enough energy to induce radioactivity in other materials. Gamma rays are similar to x-rays but have a shorter wavelength and higher energy. Thus, gamma rays are appealing for industrial sterilization of materials with a significant thickness or volume, such as packaged food, medical equipment, or medical supplies 59 .
The combination of lyophilization and gamma irradiation inhibited microbial growth by 99.8%. Gram-negative and Gram-positive bacteria, such as Proteobacteria and Lactobacillales, are more vulnerable to gamma irradiation than spore-forming bacteria, such as Bacilli and Clostridia 60 . This is in accordance with our finding that Bacillus sp., Escherichia coli, Micrococcus luteus, and Serratia marcescens were the dominant aerobic bacteria. Similar findings were recorded by Dharmarha et al. 61 , who found that gamma irradiation decreased the total counts of Pseudomonas sp., Escherichia coli , and Yersinia sp. more strongly than those of spore-forming bacteria. Gamma irradiation was found to affect microbial DNA by altering its composition 21 . It reacts with water molecules, forming free radicals that damage DNA and break the DNA double strand irreparably, causing microbial cell death 62 .
An infection is deemed to exist when there are more than 10⁵ CFU of bacteria per gram of skin graft 63 . On wound beds with more than 10⁵ bacteria/g of tissue, the absorption of a skin graft is decreased 64 . In our results, the total bacterial counts were less than 35 CFU/cm² at 10 and 25 kGy after 30 days of sterilization, indicating the high sterilizing efficiency of these doses; however, 10 kGy is preferred as it maintains the collagen integrity and content. Also, the total counts of yeasts and fungi were less than 10 and 16 CFU/cm², respectively, at 10 and 25 kGy after 30 days. In accordance with our results, clinical investigations found that skin transplant failure occurred in wounds that were significantly infected with 10⁷ pathogens 65 , 66 . Also, Nsaful et al. 67 found that wound graft failure mainly results from microbial contamination, especially bacterial contamination, with bacterial counts ranging from 3.7 × 10⁵ to 9 × 10⁵ CFU/cm².
Accordingly, histological evaluation of the fish skin grafts revealed that exposure to gamma irradiation at 25 kGy caused the collagen fibers to dissociate and disintegrate with a reduction in collagen intensity, whereas exposure at 5 kGy altered collagen fiber arrangement and integrity. However, exposure to gamma irradiation at 10 kGy was preferred, since it preserved the collagen fiber content and intensity in the skin graft.
A limitation of this study is the lack of histological and microbiological assessment of the sterilized vacuum-packed lyophilized fish skin grafts over longer periods (6 and 12 months). Future studies should be conducted to address this concern.
This study established an optimized method for tilapia skin lyophilization and defined the standard gamma irradiation dose for sterilizing lyophilized tilapia skin, considering the collagen integrity and microbial count of the fish skin. These processed, sterilized, vacuum-packed, lyophilized fish skin patches could be suitable for long-term storage for burn and wound management in hospitals and medical centers. Further clinical studies on this product are still being conducted.
A.I.: Conceptualization, Methodology, Validation, Data curation, Visualization, Investigation, Supervision, Writing- Original draft preparation, Writing- Reviewing and Editing. H.F.: Conceptualization, Methodology, Data curation, Investigation, Writing- Original draft preparation. G.A.-E.M.: Methodology, Visualization, Investigation, Writing- Original draft preparation. A.E.: Methodology, Validation, Data curation, Investigation, Writing- Original draft preparation. M.S.: Methodology, Validation, Data curation, Software, Visualization, Investigation, Supervision, Writing- Original draft preparation, Writing- Reviewing and Editing.
Funding
Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
Data availability
All data generated or analyzed during this study are included in this published article.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:56 | Sci Rep. 2024 Jan 13; 14:1253 | oa_package/80/57/PMC10787751.tar.gz |
PMC10787752 | 38218958 | Introduction
When matter vibrates, it transmits energy waves to our ears and produces sound. The sound depends on two factors: the material of the object, which affects the timbre, and how slowly or quickly the object vibrates, which changes the pitch. Sound event classification is a technique used to distinguish the actions involved in audio — actions generated in our environment such as music genres, human speech, running water, and animal sounds. Nowadays, sound event classification is a common problem, usually addressed with the following pipeline: (1) audio pre-processing: the audio data is transformed into a signal that can be recognized by the learning algorithm; (2) feature extraction: to make the classification algorithm more efficient and accurate, high-level representative features are extracted from the raw signal — for example, the Short-Time Fourier Transform (STFT) and the Constant-Q transform are well-known feature extraction techniques; (3) sound event algorithm: once the extracted features are prepared, an algorithm is developed to predict the label of the sound event from the input features. In the early 2000s, experts focused on generating powerful audio descriptors 1 . These are usually divided into three sets: low-level features, score features, and mid-level features. Some hand-crafted music features belong to these sets, for example spectral centroid 2 and pitch salience 3 . Score features, such as mode 4 and pitch density 5 , are directly calculated from the musical score. In general, mid-level features 6 are intuitively understandable by the typical listener. Nowadays, deep learning algorithms have made tremendous progress by integrating feature extraction and decision-making for classification. With many trainable parameters, a network can directly learn to map the extracted input features to target labels.
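As an illustration of the feature-extraction step, a log-Mel spectrogram can be computed from a raw waveform. The following is a minimal NumPy sketch; parameter values such as n_fft and n_mels are illustrative defaults, not those of any cited system:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(y, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Frame the signal and apply a Hann window
    n_frames = 1 + (len(y) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([y[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Power spectrum of each frame (STFT magnitude squared)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # (frames, n_fft//2 + 1)
    # Triangular mel filterbank, equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fb[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[m - 1, k] = (r - k) / max(r - c, 1)
    mel = spec @ fb.T  # (frames, n_mels)
    return np.log(mel + 1e-10)
```

The resulting 2-D array (time frames × mel bands) is the kind of input an encoder network consumes.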
Also, as the number of labeled samples increases, the network becomes more robust and reliable. Reference 7 built a sound event classification system by adapting the traditional convolutional neural network from the image domain to the audio domain; the system performs well even in a noisy environment. In general, it is challenging to recognize non-stationary sound under the interference of ambient noise. Reference 8 proposed a two-stream convolutional neural network: one stream directly processes clips of the raw audio sequence, while the other learns rich representative features from the log-Mel spectrogram. Recently, the transformer network has been receiving more attention; Reference 9 explores the transformer structure to detect sound events based on audio tagging and temporal location information.
However, in addition to such known sound events, many unknown sound events exist in our daily lives, including rhythms, scenes, instrument sounds, etc. So the question "what is that sound?" frequently comes up. Usually, a machine learning algorithm learns categories on a closed set, where the training and test data share the same feature embedding spaces and labels: the training data includes samples with target labels (A, B, C), and the test data is restricted to the labels (A, B, C); classes like D, F, etc. cannot appear at test time. In realistic scenarios, however, it is inevitable that unknown observations D, F, or others are introduced into the dataset. The task of learning only with classes (A, B, C) while predicting either one of the categories (A, B, C) or an unexpected class such as D or F is called open set recognition. In the open set recognition task, several categories (Fig. 1 ) are assumed to characterize the learning models 10 : (1) Known Known classes (KKCs): samples annotated by humans and given the corresponding labels. (2) Known Unknown classes (KUCs) 11 : labeled negative samples that do not belong to any class of interest; they are typically grouped into background or "other" classes rather than described by attributes of their own. (3) Unknown Known classes 12 : classes for which no samples are available during training, but side information, such as semantic/attribute information, is available. Algorithms that detect unknown known classes are called few-shot open set recognition, a subset of the open set recognition scope 13 – 16 . (4) Unknown Unknown classes (UUCs) 17 : samples of these categories do not exist during training, and neither attribute nor semantic information about them is known. Our task focuses on unknown unknown classes.
The algorithm learns only from labeled known classes, yet is able to detect unknown classes D, F, or others, even though we do not know, and cannot describe, what they are.
In this paper, we deal with classifying unknown sound events or their related patterns. In summary, our work makes the following key contributions: (1) We propose an open set sound event classification network. The encoder receives the 2-D log-Mel temporal spectrogram as input, and the decision head determines the probabilities of the known sound events; from these probabilities, our scheme can finally identify unknown sound events. (2) It is universally acknowledged that labeling audio data is challenging, even though neural networks require a mass of labeled samples to learn, and more data usually yields better performance. We therefore explore a self-supervised learning approach to the open set sound event classification problem. The network first learns on the unlabeled MagnaTagATune/DCASE2019 Subtask 1C dataset, utilizing the self-supervised contrastive loss to capture rich representative features of sound events; the learned network is then fine-tuned on the downstream dataset. (3) Strong performance on the known classes also benefits unknown detection, because a compact cluster structure leaves sample space in the embedded feature space for detecting unknown samples 18 . For a compact cluster structure, the open set sound event classification network is optimized by center 19 and supervised contrastive 20 losses, as well as cross-entropy loss. (4) We evaluate the proposed method on various datasets. The experimental results reveal that both the self-supervised approach and the proposed losses improve the performance of open set classification of sound events.
Pipeline overview
Figure 2 shows the pipeline of the classifier of unknown events. Our system can be divided into two stages: the self-supervised training stage and the fine-tuning stage.
In the self-supervised training stage, the raw audio sequences are first transformed into different audio sequences without modifying the intrinsic meaning of the audio. Audio transformation techniques such as pitch variation and noise injection can be applied to increase the number of training samples; the raw audio and transformed audio are scale invariant. Then, we generate the log-Mel spectrogram from the raw and augmented sequences. The encoder network obtains discriminative features, and a projection head is constructed on top of it to obtain representative features. The self-supervised contrastive loss projects these features into the clustering embedding space, where the distance between different audio sequences becomes as large as possible, while views of the same audio sequence move closer together.
Once the encoder has been trained, we fine-tune the network parameters on the downstream task. In this stage, the network jointly learns both clustering losses and the fundamental classification loss (cross-entropy loss). The clustering losses are the center loss and the supervised contrastive loss: the center loss builds a compact clustered feature space by clumping intra-class samples together, while the supervised contrastive loss projects samples into a well-structured high-dimensional embedding space, where inter-class samples are scattered by pushing each other apart and intra-class ones are compacted by pulling each other together.
Self-supervised training
Effective data augmentation
For effective contrastive representation learning, we propose various audio data augmentation techniques 57 for generating positive/negative audio segment samples:
Polarity inversion: audio is an analogue signal that records the response of the changing sound source (e.g., violin strings being struck or a drum vibrating). The polarity is positive or negative with respect to the median line, which refers to waveform alignment. We augment the audio by flipping the polarity.
Noise injection: in the real world, audio quality may be degraded by an old recording device or by signal transmission. We add white noise to simulate this phenomenon.
Delay: we randomly play the audio back after a period. This may create an echo-like effect, but it does not affect the prediction of sound events.
High/low pass filters: we apply high/low pass filters to the waveform, characterized by a cut-off frequency. Ambient sounds are usually a mix of different sound sources, and high/low pass filters can strengthen a specific sound source.
Gain: the modification of gain reflects a change in the volume of auditory sensors; gain controls the microphone preamp by changing the input voltage. Even though gain control may sometimes cause distortion or grit, gain augmentation is helpful for increasing the number of samples in some cases — for example, audio recorded in different public places has different characteristics.
We generate positive and negative sample instances by randomly combining the augmentation methods. Specifically, we apply polarity inversion, noise injection, delay, high/low pass filters, and gain with probabilities of 0.8, 0.3, 0.5, 0.2, and 0.8, respectively. Noise is injected with a random signal-to-noise ratio between 0.3 and 0.5, and the other settings are the default values from the torchaudio_augmentations library.
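A simplified sketch of such a stochastic augmentation chain is shown below, using plain NumPy approximations of polarity inversion, noise injection, and gain (the delay and filter transforms are omitted, and the parameter ranges are illustrative rather than the library defaults):

```python
import numpy as np

rng = np.random.default_rng(0)

def polarity_inversion(y):
    # Flip the waveform around the median line
    return -y

def add_noise(y, snr_db):
    # Scale white noise so the signal-to-noise ratio matches snr_db
    sig_pow = np.mean(y ** 2)
    noise = rng.standard_normal(len(y))
    noise_pow = sig_pow / (10 ** (snr_db / 10))
    return y + noise * np.sqrt(noise_pow / np.mean(noise ** 2))

def gain(y, db):
    # Amplify or attenuate by db decibels
    return y * (10 ** (db / 20))

def augment(y):
    # Apply each transform with the paper's probabilities
    # (0.8 polarity, 0.3 noise, 0.8 gain)
    if rng.random() < 0.8:
        y = polarity_inversion(y)
    if rng.random() < 0.3:
        y = add_noise(y, snr_db=rng.uniform(3, 5))
    if rng.random() < 0.8:
        y = gain(y, db=rng.uniform(-6, 6))
    return y
```

Two independent calls to `augment` on the same clip yield the two "views" that form a positive pair for contrastive training.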
Self-supervised representation loss
We choose the InfoNCE loss 54 , 56 to address inter-class dispersion and intra-class compactness 58 and learn useful representations of audio:
L_ssl = −log [ exp(sim(z, z⁺)/τ) / ( exp(sim(z, z⁺)/τ) + Σ_{z⁻ ∈ N(i)} exp(sim(z, z⁻)/τ) ) ],
where P(i) is the collection of arbitrary augmented samples from the same audio sequence, τ is the temperature parameter, and z⁺ and z⁻ are the positive and negative embeddings for the anchor z, respectively. N(i) is the set of negative pairs for the anchor. The positive and negative pairs are extracted from the augmented patches of the log-Mel spectrogram. For example, given two patches A and B with augmented versions A′ and B′, the pairs (A, B) and (A′, B′) are negative pairs, while (A, A′) and (B, B′) are positive pairs, even though we do not know the labels of the A and B patches. The function sim(·, ·) measures the cosine similarity between two input embeddings. During training, the loss minimizes the distance of all positive pairs and disperses the others (negative pairs).
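A minimal NumPy sketch of this loss for a single anchor, assuming cosine similarity and one positive per anchor:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def info_nce(anchor, positive, negatives, tau=0.5):
    # -log( exp(sim(a,p)/tau) / (exp(sim(a,p)/tau) + sum_n exp(sim(a,n)/tau)) )
    pos = np.exp(cosine(anchor, positive) / tau)
    neg = sum(np.exp(cosine(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))
```

The loss is small when the anchor is close to its positive view and far from the negatives, which is exactly the geometry the self-supervised stage seeks.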
Supervised learning and fine-tuning
Training of the proposed model (Fig. 2 b) minimizes three types of losses — cross-entropy loss, center loss, and supervised contrastive loss:
L = λ₁·L_ce + λ₂·L_ct + λ₃·L_scl,   (1)
where λ₁, λ₂, and λ₃ weight the three terms.
In this section, we describe the implementation of the three types of losses in detail.
Cross-entropy
For the labeled data in a batch of size N, the cross-entropy loss is defined by:
L_ce = −(1/N) Σ_{i=1}^{N} log [ exp(o_{i,y_i}) / Σ_{k=1}^{K} exp(o_{i,k}) ],
where N spans the mini-batch dimension, and o_{i,k} is the logit value of the k-th class for the log-Mel spectrogram x_i. The dataset has K classes. This cross-entropy loss drives the correct classification of labeled inputs.
Center loss
The center loss 19 draws inspiration from clustering algorithms and was first used to solve the face identification task. Intuitively, the key is to reduce the intra-class variation. The center loss builds a center c_y for each category y, where c_y denotes the class center of the features belonging to class y:
L_ct = (1/2) Σ_{i=1}^{N} ‖f_i − c_{y_i}‖²,
where f_i is the embedding of the i-th input. The center loss learns to minimize the distance between the embedding of the audio input and its class center. For the classification task, the centers should ideally be updated by averaging over the entire training set, which sacrifices computational efficiency. To address this problem, center loss 19 uses the gradient descent method to update the centers on each mini-batch, obtaining the average centroid over all samples.
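A sketch of the center loss and its mini-batch center update in NumPy (the update rate alpha is an illustrative hyperparameter, not a value from the paper):

```python
import numpy as np

def center_loss(feats, labels, centers):
    # L_ct = 1/2 * ||f_i - c_{y_i}||^2, averaged over the batch
    diff = feats - centers[labels]
    return 0.5 * np.mean(np.sum(diff ** 2, axis=1))

def update_centers(feats, labels, centers, alpha=0.5):
    # Mini-batch center update: move each class center toward the mean
    # of that class's features in the current batch
    new = centers.copy()
    for c in np.unique(labels):
        delta = np.mean(centers[c] - feats[labels == c], axis=0)
        new[c] = centers[c] - alpha * delta
    return new
```

Minimizing this term pulls every embedding toward its class centroid, which tightens the intra-class clusters.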
Supervised contrastive loss
The contrastive loss for supervised learning is defined as:
L_scl = Σ_i [ −1/|P(i)| ] Σ_{p ∈ P(i)} log [ exp(sim(z_i, z_p)/τ) / Σ_{n ∈ P(i) ∪ N(i)} exp(sim(z_i, z_n)/τ) ].
The sim function uses cosine similarity, sim(u, v) = uᵀv / (‖u‖·‖v‖), where P(i) and N(i) denote the sets of positive and negative samples of the anchor (the i-th) data in a batch. Different from the self-supervised contrastive loss, the supervised approach uses the data labels efficiently by exploring the correlation between samples of the same category: the positive samples of an anchor are the samples with the same label, while the negative samples come from different categories. Specifically, we randomly select a few samples as anchor data from the mini-batch, ensuring each category has at least one anchor. Samples within the same category as an anchor in the mini-batch are categorized as positive samples, while those from different categories are designated as negative samples.
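A direct, unvectorized NumPy sketch of this supervised contrastive loss over a batch (positives are same-label samples; every anchor is contrasted against all other samples):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def supcon_loss(z, labels, tau=0.5):
    # For each anchor i, average -log over its positives (same label),
    # normalized against all other samples in the batch.
    n = len(z)
    total, count = 0.0, 0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue  # anchors without positives contribute nothing
        denom = sum(np.exp(cosine(z[i], z[j]) / tau)
                    for j in range(n) if j != i)
        for p in pos:
            total += -np.log(np.exp(cosine(z[i], z[p]) / tau) / denom)
            count += 1
    return total / count
```

When embeddings of the same class are close and classes are well separated, this loss is low; mixing the labels across the clusters raises it.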
Identify unknowns in open set
The straightforward approach to the unknown decision is to use a threshold. In our work, we applied the MSP method 22 to find the unknown samples:
ŷ = unknown, if max_k p_k(x) < δ; otherwise ŷ = argmax_k p_k(x),   (5)
where p_k(x) is the softmax probability of the k-th class for the input sample x. If the maximum of all the softmax probabilities is smaller than the threshold δ, an unknown class has been detected. Otherwise, the class label is the index of the maximum softmax probability value.
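The MSP decision rule is straightforward to implement; a NumPy sketch, returning −1 for "unknown" (the class index −1 is an arbitrary convention of this sketch):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def msp_predict(logits, delta=0.5):
    # Return the class index, or -1 ("unknown") if the maximum
    # softmax probability falls below the threshold delta
    p = softmax(np.asarray(logits, dtype=float))
    return int(np.argmax(p)) if p.max() >= delta else -1
```

A confident, peaked logit vector is assigned its top class, while a flat distribution — no known class dominating — is rejected as unknown.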
Evaluation metrics
We evaluate the performance of the open-set sound event recognition model by two evaluation metrics:
AUROC (area under the receiver operating characteristic)
It is a threshold-independent evaluation metric derived from the receiver operating characteristic (ROC) curve, which plots the true positive rate against the false positive rate of the model's predictions across different thresholds. AUROC is used to evaluate the ability of a model to identify unknown classes.
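AUROC equals the probability that a randomly chosen known (positive) sample receives a higher score than a randomly chosen unknown one, with ties counting one half; a tiny pairwise sketch:

```python
def auroc(scores_known, scores_unknown):
    # Fraction of (known, unknown) pairs where the known sample
    # scores higher (ties count 0.5) -- the area under the ROC curve.
    wins = 0.0
    for k in scores_known:
        for u in scores_unknown:
            if k > u:
                wins += 1.0
            elif k == u:
                wins += 0.5
    return wins / (len(scores_known) * len(scores_unknown))
```

A value of 1.0 means the score separates known from unknown perfectly, while 0.5 corresponds to random scoring.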
ACC
We use the official evaluation toolbox to report results 64 , where the evaluation metric of the DCASE2019 Task 1C challenge is the weighted average of the known-class and unknown-class accuracies. The ACC is the degree of closeness to the true value and shows how often the prediction matches the ground-truth label: $ACC = \frac{TP + TN}{TP + TN + FP + FN}$, where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. Following the evaluation toolbox of the open set acoustic scene classification task, we use the weighted average of the unknown and known classes: $ACC_w = \frac{1}{2}\left(ACC_{known} + ACC_{unknown}\right)$, where $ACC_{known}$ is the mean accuracy of the known classes, and $ACC_{unknown}$ is the unknown-class accuracy.
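The weighted metric can be sketched as follows; the equal 0.5/0.5 weighting reflects our reading of the DCASE2019 Task 1C convention, and the label strings are illustrative:

```python
def accuracy(preds, truths):
    """Fraction of predictions matching the ground truth."""
    return sum(p == t for p, t in zip(preds, truths)) / len(preds)

def weighted_open_set_acc(preds, truths, unknown="unknown"):
    """Equal-weight average of known-class accuracy and unknown accuracy
    (assumed 0.5/0.5 weighting, following the DCASE2019 Task 1C setup)."""
    known = [(p, t) for p, t in zip(preds, truths) if t != unknown]
    unk = [(p, t) for p, t in zip(preds, truths) if t == unknown]
    acc_known = accuracy(*zip(*known))
    acc_unknown = accuracy(*zip(*unk))
    return 0.5 * acc_known + 0.5 * acc_unknown

truths = ["park", "metro", "unknown", "unknown"]
preds = ["park", "park", "unknown", "metro"]
# acc_known = 1/2 and acc_unknown = 1/2, so the weighted score is 0.5.
```

This makes explicit that a model cannot inflate the score by always predicting "unknown" (or never doing so), since both terms contribute equally.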
Our classifier reports the softmax probability distribution over the categories and filters out unknowns with a threshold of 0.5 59 , as in Eq. ( 5 ). Here, we found a large gap in performance depending on the chosen threshold, which is why we also provide AUROC metrics to evaluate the model.
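The threshold rule of Eq. ( 5 ) amounts to a few lines. In this sketch (class names are illustrative), a peaked softmax distribution yields a known class, while a flat one falls below the 0.5 threshold and is rejected as unknown:

```python
import math

def softmax(logits):
    """Numerically stable softmax normalization."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def msp_decision(logits, classes, threshold=0.5):
    """Return 'unknown' when the maximum softmax probability falls below
    the threshold, otherwise the arg-max class, as in Eq. (5)."""
    probs = softmax(logits)
    p_max = max(probs)
    if p_max < threshold:
        return "unknown"
    return classes[probs.index(p_max)]

classes = ["airport", "bus", "park"]
confident = msp_decision([4.0, 0.5, 0.2], classes)  # peaked -> known class
uncertain = msp_decision([1.0, 0.9, 0.8], classes)  # flat -> unknown
```

Shifting `threshold` trades known-class accuracy against unknown recall, which is precisely the sensitivity noted above.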
Open set acoustic scene classification with proposed losses
First, we evaluate our proposed network with the clustering losses: the center loss and the supervised contrastive loss. For comparison, we used the same encoder structure (Fig. 3 ) 59 , where the network consists of 5 convolutional, batch normalization, and ReLU layers, and the feature map is down-sampled twice with stride 2. Finally, global average pooling and a fully connected layer map the discriminative features into feature vectors of size 10. We train the classification model for 200 epochs with a batch size of 32 and the Adam optimizer 65 . The learning rate is set to 0.001 and is multiplied by 0.1 at epochs 100 and 150. The hyperparameters $\lambda_1$, $\lambda_2$, and $\lambda_3$ in Eq. ( 1 ), weighting the cross-entropy, center, and supervised contrastive losses, are set to 1, 0.1, and 0.1, respectively.
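The learning-rate schedule just described can be expressed as a simple step-decay rule (a sketch only; the actual training loop would use a deep-learning framework's scheduler):

```python
def step_lr(epoch, base_lr=0.001, milestones=(100, 150), gamma=0.1):
    """Multiply the learning rate by `gamma` at each milestone epoch,
    mirroring the schedule above (0.001, x0.1 at epochs 100 and 150)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# The rate is 0.001 before epoch 100, then decays by 0.1 per milestone.
```

Step decay lets the optimizer take large steps early and settle the cluster centers late, which suits the center-loss objective.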
Table 1 shows the performance on DCASE2019 Task 1C. Using the center and contrastive learning losses improves the performance incrementally compared to the threshold method 59 . Table 1 illustrates that adopting the losses leads to the best AUROC of 74.6%, a gain of 6%. The contrastive loss improves the robustness of detecting the known and unknown classes by exploiting the distances and relationships within and between classes.
Self-supervised contrastive learning on the open set scene classification dataset
Self-supervised learning is a machine learning approach that tries to extract the inherent structure of the data by deriving supervisory signals from the data itself. This is beneficial for learning rich data representations and improving generalization in downstream tasks. In self-supervised training, to facilitate convergence, we used a large batch size of 1600 to create sufficient positive and negative pairs. We follow the audio augmentation methods in the “ Effective data augmentation ” section to create positive/negative samples from the train/leaderboard/evaluation splits of DCASE2019 Subtask 1C (no labels given). Performance improves as the number of training epochs increases 39 ; based on that, we trained the network for 5000 epochs. The temperature of the contrastive loss is set to 0.1. The optimizer was Adam 65 with a learning rate of 0.01, successively multiplied by 0.1 at epochs 1000, 2500, and 3500.
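The batch construction for contrastive pretraining can be sketched as follows. The `random_crop` augmentation below is a hypothetical stand-in for the transforms of the “ Effective data augmentation ” section; the essential point is that two augmented views of the same clip form a positive pair, while all other views in the batch act as negatives:

```python
import random

def make_contrastive_batch(clips, augment, seed=0):
    """Create two augmented 'views' per clip; views of the same clip are
    positives for each other, all other views in the batch are negatives."""
    rng = random.Random(seed)
    views, pair_of = [], {}
    for i, clip in enumerate(clips):
        a, b = augment(clip, rng), augment(clip, rng)
        views += [a, b]
        pair_of[2 * i] = 2 * i + 1
        pair_of[2 * i + 1] = 2 * i
    return views, pair_of

# Hypothetical augmentation: a random crop along the temporal axis.
def random_crop(x, rng, size=4):
    start = rng.randrange(len(x) - size + 1)
    return x[start:start + size]

clips = [list(range(10)), list(range(100, 110))]
views, pair_of = make_contrastive_batch(clips, random_crop)
# 2 clips -> 4 views; view 0 and view 1 come from the same clip.
```

With a batch of 1600 clips this construction yields thousands of negatives per anchor, which is why the large batch size helps convergence.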
In the second phase, we chose the pretrained weights with the lowest loss on the test data and fine-tuned them on the training data (with labels) 59 in a supervised manner. A learning-rate warm-up strategy was employed to reduce instability early in training. Therefore, we started with a small learning rate of 1e–6 for the initial 3 epochs. After that, the network and the center-loss centers continued training at learning rates of 0.0001 and 0.0005, respectively. The weight of both the center and contrastive losses was 10. We used the same encoder architecture as in the open set acoustic classification task (Fig. 3 ). We froze the first and second Conv2d + ReLU + BatchNorm blocks and retrained the rest. The network converged within 200 epochs.
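The warm-up and layer-freezing recipe above can be sketched framework-agnostically. The layer names are illustrative (the real model freezes the first two Conv2d + ReLU + BatchNorm blocks of the encoder in Fig. 3 ):

```python
def warmup_lr(epoch, warmup_epochs=3, warmup_lr_val=1e-6, base_lr=1e-4):
    """Tiny learning rate for the first few epochs, then the base rate,
    as in the fine-tuning phase described above."""
    return warmup_lr_val if epoch < warmup_epochs else base_lr

def trainable_mask(layer_names, frozen_prefixes=("block1", "block2")):
    """Mark which layers stay frozen during fine-tuning; here the first
    two blocks, matching the text (names are hypothetical)."""
    return {name: not name.startswith(frozen_prefixes) for name in layer_names}

mask = trainable_mask(["block1", "block2", "block3", "block4", "fc"])
# Only block3, block4 and the classifier head receive gradient updates.
```

Freezing the early blocks preserves the low-level features learned during self-supervised pretraining while the later layers adapt to the labeled task.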
Figure 4 shows the experimental results; we use the same threshold of 0.5 59 to detect unknown samples. “Baseline” denotes a network whose weights are randomly initialized and which is optimized by cross-entropy loss alone. “w.o. self-supervised” means the model is optimized with our proposed losses but with randomly initialized weights. We can conclude that the self-supervised backbone (“w. self-supervised”) learns richer representations by contrasting positive and negative pairs. With the guidance of the self-supervised contrastive loss, we achieved improved performance on the known classes and more accurate unknown detection (around 3%). The AUROC increased to 78.2%, 10% above the baseline.
In addition, Fig. 5 plots the confusion matrix of the classification results. Our proposed method works well on the known scenes and performs reasonably in finding the unknown scenes. The confusion matrix in Fig. 5 shows that the model correctly detected 2783 out of 3293 known scenes (Supplementary material Table 2 ) and obtained 42.3% recall on the unknown scenes, even though the training did not include any unknown scene. With the help of the self-supervised pre-trained weights, the model constructs a better structure that promotes intra-class compactness and inter-class dispersion.
Results on the GTZAN dataset
We conducted experiments on the GTZAN dataset. For self-supervised learning, we selected the MagnaTagATune dataset and augmented the audio chunks with a random combination of five types of transforms (“ Effective data augmentation ” section). We used the same encoder network as in Fig. 3 . The network was trained with the Adam optimizer at a learning rate of 0.005 for 10,000 epochs, with the learning rate automatically decayed by a factor of 0.5 at epochs 5000, 8000, and 9000. The batch size was set to 1024 to obtain more negative samples.
In the supervised fine-tuning phase, we used the GTZAN data with the self-supervised pre-trained weights. A warm-up strategy with a relatively small learning rate of 0.0005 was applied to reduce the primacy effect of the early training samples. The Adam optimizer then trained the network with a learning rate of 0.005. For the experiments with the center and supervised contrastive losses in Eqs. ( 3 ) and ( 4 ), the weight parameters $\lambda_1$, $\lambda_2$, and $\lambda_3$ are 1, 0.1, and 0.1, respectively. We applied random temporal-axis crop augmentation to increase the number of samples, and we trained for 500 epochs.
Table 2 (rows 3 and 4) illustrates the performance with the self-supervised pre-trained weights. We obtained competitive results: with the help of the proposed losses and the self-supervised pretrained weights, performance improved significantly, reaching 82.7% AUROC. The self-supervised approach closely imitates human behavior in automatically learning to distinguish between objects. When only a small number of labels is available, the model can still learn discriminative features from the unlabeled data and produce reasonable results.
Figure 6 shows a confusion-matrix pattern similar to that of the open set acoustic dataset. Our method recognizes the unknown genre as well as classifying the known genres, achieving an average of 71.3% precision and 69.0% recall for unknown identification (Supplementary material Table 3 ).
Experimental results on labeling efficiency in the GTZAN dataset
The experimental results on the GTZAN dataset show a similar effect (Fig. 7 ): in identifying unknown samples, the model trained with only a 90% subset of the labeled data outperformed the model trained with 100% labeled data and random weight initialization. Self-supervised learning thus has the potential to improve performance without requiring excessive expenditure on labeling data. Figure 8 shows t-SNE visualizations of the embeddings produced by the self-supervised pre-trained model. The model can clearly build distinct boundaries between the various genres.
Experimental results on the Tori dataset
Finally, we performed experiments on the Tori dataset (Table 3 ). The self-supervised pretraining efficiently learns discriminative features and obtains more than a 10% improvement in terms of AUROC. Furthermore, the ACC results show a large boost with the help of the self-supervised weights. Moreover, our proposed method performs better at detecting the unknown samples and produces high overall performance.
Figure 9 draws the confusion matrix for the best configuration (self-supervised + cross-entropy + center + contrastive loss). The model performs well on the known classes (Gyeong, Menali, and Yukjabaeki Tori) and obtains a reasonable result in distinguishing the unknown samples, even without any semantic/attribute information (Fig. 1 ) about the unknown class during training. Note that the unknown Susimga Tori was misclassified as the learned Gyeong Tori due to their similar pitch appearance, even though they have different characteristics in pitch progression. Because our model does not capture sequential behavior well, such confusion might be inevitable.

Conclusion and discussion
In this paper, we investigated the self-supervised learning approach to open-set sound event classification. The experimental results on the DCASE2019 Subtask 1C, GTZAN, and Tori datasets demonstrated that the self-supervised pre-trained model improves robustness and the model's capability to detect unknown samples. The basic notion of open set recognition is that a compact cluster structure in the feature space for known classes facilitates the recognition of unknown classes by leaving ample room to locate unknown samples in the embedded feature space. Based on this concept, we proposed center and supervised contrastive losses, where the center loss tries to minimize the intra-class distance by pulling the embedded features toward the cluster center, while the contrastive loss disperses the inter-class members from each other. The experimental datasets are derived from a range of audio processing tasks, encompassing genre classification, traditional vocal style differentiation, and acoustic scene classification, which highlights the ability of the method to generalize across diverse tasks. Moving forward, we aim to investigate various music information retrieval tasks to demonstrate the universality of our approach.
However, some issues still need to be addressed: (1) A proper threshold is crucial to obtain good performance in unknown detection, but finding the optimal threshold is challenging. (2) There is no semantic information on unknown samples in the model training process (Fig. 1 , unknown class). The performance of unknown detection does not yet meet the requirements for industrial deployment. In the future, semantic information about the unknown might be introduced during training to improve the accuracy of identifying unknown samples. For example, in Reference 16 , researchers applied a Siamese network to achieve open-set face recognition, including known and unknown samples (the id label of each image is excluded) during training. The network focused on learning the feature discrepancy between known and unknown samples rather than the identity (id label) of the image itself.
Abstract

Sound is one of the primary forms of sensory information that we use to perceive our surroundings. Usually, a sound event is a sequence of an audio clip obtained from an action; the action can be a rhythm pattern, a music genre, people speaking for a few seconds, etc. Sound event classification addresses distinguishing what kind of audio clip a given audio sequence is. Nowadays, it is commonly solved with the following pipeline: audio pre-processing → perceptual feature extraction → classification algorithm. In this paper, we improve the traditional sound event classification algorithm to identify unknown sound events using the deep learning method. A compact cluster structure in the feature space for known classes helps recognize unknown classes by allowing large room to locate unknown samples in the embedded feature space. Based on this concept, we applied center loss and supervised contrastive loss to optimize the model.
The center loss tries to minimize the intra-class distance by pulling the embedded feature into the cluster center, while the contrastive loss disperses the inter-class features from one another. In addition, we explored the performance of self-supervised learning in detecting unknown sound events. The experimental results demonstrate that our proposed open-set sound event classification algorithm and self-supervised learning approach achieve sustained performance improvements on various datasets.
Related work
Open set recognition
The open set recognition task aims to detect unexpected samples while learning only from known samples. It has primarily been applied to the image classification problem 21 and extended to other fields. In this section, we explain the open-set recognition problem in two application areas:
Open set recognition in computer vision : In real-world recognition problems, it is inevitable that unknown points, events, or observations appear in a collected dataset. Human error, low-precision machine operation, or the natural environment (vibration, geomagnetic activity, illumination, etc.) usually cause those unknowns, which can indicate critical incidents. To address this problem, Reference 22 considers the detection of unknown or out-of-distribution classes. In their observations, once a practical model has been learned, correctly detected samples tend to present higher softmax probabilities than unknown samples, so a proper threshold can filter the unknown samples from the known ones. The OpenMax algorithm 23 was proposed to classify unexpected inputs in an open set; it exploits the activation vector before the softmax layer, in which related classes often respond together for a genuine image but are confused for a fake one. Reference 24 built Classification–Reconstruction learning for Open-Set Recognition (CROSR) by utilizing the reconstructed latent representations and achieved robust unknown detection without compromising the classification accuracy of known categories. Reference 25 proposed a distance-based loss based on the assumption that the closer the known objects are to each other, the easier it is to find the unknown objects. Semantic segmentation is a more challenging task than classification because the algorithm has both to understand the representative information of the different pixels and to learn their interrelationships to build clear object boundaries. Reference 26 aggregated the abnormal pixels by leveraging the softmax predictions at the bottom of a traditional fully convolutional network. References 27 , 28 used a contrastive approach to construct tighter clustering of known objects and improved the ability to segment unknown pixels. Reference 29 created a memory bank to store contrastive features, solving the bottleneck that the contrastive loss requires significant computation and memory to obtain semi-hard negative pairs.
Open set recognition in the audio domain : Currently, most research emphasizes the computer vision field, and very few works exist in the audio domain. A key challenge in open-set audio recognition is dataset collection; some notable research areas, such as rhythm pattern recognition, require a professional expert to annotate the audio. Reference 30 designs an open set classifier using a support vector machine to map the data into a high-dimensional feature space. Reference 31 applied a CNN architecture for closed-set classification and deep convolutional autoencoders (DCAEs) for unknown detection. Reference 32 utilizes the support vector data description (SVDD) model to construct an adequate description of the known-data boundary in the feature space and rejects out-of-distribution samples. Reference 33 adopted a class-conditioned autoencoder to detect the unknown, assuming that unknown samples have larger reconstruction errors than known samples.
Self-supervised representation learning
Data labeling is a challenging task that takes time and manual effort to complete correctly. The goal of self-supervised representation learning is to learn rich feature representations without vast amounts of labeled data 34 . In self-supervised learning, we carefully design label-free pretext tasks and learn the internal features of the data through a supervised approach. The initial attempts started in the field of computer vision. Reference 35 designed an architecture for pair classification, where the input data is augmented and split into several patches, and the architecture is forced to predict the correct spatial configuration of pairs of patches. Reference 36 demonstrated that the learned features generalize across samples and are suitable for various tasks. References 36 – 39 explored the efficiency of combining multiple pretext tasks to train deep neural networks; however, maintaining consistency during training takes time and effort. In summary, the algorithm needs to skillfully pick valuable samples to train the network efficiently. To solve this problem, References 40 , 41 built a dynamic dictionary with a queue and a moving-averaged encoder and achieved competitive results by reusing the “hard” samples from the dynamic dictionary. Reference 42 created a Cross Quantized Contrastive learning technique that simultaneously learns codewords and deep visual descriptors across different images.
In music information retrieval (MIR) 43 , experts attempted to learn acoustic features by utilizing the natural synchronization between video and sound signals 44 . However, video represents 3-dimensional features, while sound needs to be transformed into 2-D embedded features, and carefully designing a model to learn both video and sound simultaneously is complex and time-consuming. Inspired by the progress in computer vision, References 45 , 46 exploited the contrastive loss to extract acoustic representations from the log-Mel spectrogram and obtained a 30% performance gain in speech recognition. References 47 , 48 took inspiration from Reference 49 to adapt contrastive predictive coding to speech recognition. HuBERT 50 applied a clustering method to provide labels for the losses and learn the acoustic/linguistic model from the speech data itself. Reference 51 explored the advantage of a diverse set of feature representations by jointly training the conventional ResNet101 52 architecture with four pretext tasks. Other studies applied the learned representative information to various downstream tasks, such as sound classification 53 , 54 and music generation 55 , 56 . Our study is also related to self-supervised representation learning. First, we design a simple network structure with five convolutional layers and two down-sampling operations to learn representative features from a large amount of unlabeled acoustic data by minimizing the self-supervised contrastive loss. Then, the learned network model is fine-tuned for the classification task to evaluate the efficiency of the representative features in detecting unknown audio events or their related patterns.
Dataset acquisition
To demonstrate the effectiveness of identifying unknown classes, we conducted experiments on various acoustic datasets. We trained the network using a self-supervised approach on the DCASE2019 Subtask 1C or MagnaTagATune dataset. Then, the open set acoustic scenes, GTZAN, and Tori datasets are used to evaluate the performance of our network on the open set task. In the supervised approach, each log-Mel spectrogram patch produces one class label; the input to the model is the log-Mel spectrogram patch, while the output is a classification label.
DCASE2019 Subtask 1C
The open set acoustic scene classification dataset (DCASE2019 Subtask 1C) includes ten major classes (Supplementary material Table 1 ) and other minor classes of audio recordings. The minor classes are environmental sound events, which are labeled as “unknown”. The dataset was collected in 12 large European cities; the length of each recording is 5–6 min, split into 10-s segments. The overall length of the dataset is 44 h with 15,850 segments (1450 of unknown classes + 14,400 of the 10 major scene classes). Each segment is recorded in a mono channel with a 44.1 kHz sampling rate. The limited number and variety of training samples make this task difficult and may produce poor generalization in trained models.
We followed the same approach as in 59 to process DCASE2019 Subtask 1C and did not use any other external data resource during training. The classifier was trained only with known categories and tested with known + unknown data. The open set acoustic dataset (DCASE2019 Subtask 1C) has train/leaderboard/evaluation folders, where only the train folder provides a corresponding label for each sound. For a fair comparison, we split it into train/validation/test as in Reference 59 . We obtain the log-Mel spectrogram of each 10-s segment with an STFT window size of 2048 and a hop length of 460. To facilitate the calculation, we pad the log-Mel spectrogram features with zeros, and the resultant feature size is 64 × 960, where the first dimension is the number of Mel-scale bins and the second is the number of frames on the temporal axis. We calculate the mean and standard deviation of each bin across all the data samples and normalize the 2-D features by these statistics for training, which is a very significant step for successfully training the network. During training, the model takes the log-Mel spectrogram as input and produces the probability of each scene class. During the test stage, unknown samples were filtered by the manually decided threshold, as in Eq. ( 5 ).
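The padding and per-bin normalization steps can be sketched on toy features; a real pipeline would operate on 64 × 960 arrays computed from the STFT:

```python
import math

def pad_frames(spec, target_frames=960):
    """Zero-pad a [mel_bins x frames] feature to a fixed frame count."""
    return [row + [0.0] * (target_frames - len(row)) for row in spec]

def per_bin_normalize(specs):
    """Normalize each Mel bin by its mean/std computed across all samples,
    the dataset-level normalization step described above."""
    n_bins = len(specs[0])
    stats = []
    for b in range(n_bins):
        vals = [v for s in specs for v in s[b]]
        mean = sum(vals) / len(vals)
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals)) or 1.0
        stats.append((mean, std))
    return [[[(v - m) / s for v in row] for row, (m, s) in zip(spec, stats)]
            for spec in specs]

toy = [[[1.0, 3.0], [10.0, 10.0]]]  # one sample, 2 bins x 2 frames
normed = per_bin_normalize(toy)
padded = pad_frames([[1.0]], 4)
# Bin 0: mean 2, std 1 -> [-1, 1]; bin 1 has zero std, guarded by `or 1.0`.
```

Computing the statistics across the whole training set (rather than per clip) keeps the input distribution stable between samples, which is what makes this step significant for training.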
MagnaTagATune dataset
We select the MagnTagATune to create a self-supervised training dataset for learning the pre-train model in the task of GTZAN and Tori experiment. The MagnaTagATune dataset 60 contains 25,863 audio chunks extracted from 5223 songs, 445 albums, and 230 artists. Each chunk of the music file is mono-recorded with a 16 kHz sampling rate. It spans a broad range of genres, making it suitable for pretraining. First, ensure the each chunk has the same length of 30 s by using replication padding. Then, the transforms of the raw audio using various augmentation methods (“ Effective data augmentation ” section) to create the positive pairs. To convert the audio sequence into log-Mel spectrogram with the same setting of DCASE2019 Subtask 1C. During the training, the algorithm automatically extracts the positive and negative samples from the augmented data, and the model minimizes the self-supervised contrastive loss (Fig. 2 a) to learn the rich representation of those samples.
GTZAN dataset
We evaluate on the GTZAN dataset in the fine-tuning phase. The well-known GTZAN dataset has been called the MNIST of music 61 . The dataset 62 had been utilized in over 100 articles as of 2013 61 . Each song is recorded in a mono channel for over 30 s. It is popular because the concept of music genres and single-label classification is easy, simple, and straightforward. However, the dataset has several issues: (1) the audio quality varies by track; (2) heavy artist repetition is often ignored during dataset splits; (3) the labels are not 100% correct. To address these problems and allow a fair comparison 60 , we use a cleaned version and split, “fault-filtered” 63 , which divides the dataset into train, validation, and test sets.
The network (Fig. 2 a) learns rich representative features from augmented data extracted from the MagnaTagATune dataset. In supervised learning, the model is fed the training set of the “fault-filtered” GTZAN dataset and predicts the probability of each genre. The unknown class is assumed to be metal: we removed the samples of the metal class from the training data and evaluated both known and unknown classes on the test data.
Tori dataset
In traditional Korean folk songs, the singing style differs from province to province; each style is called a Tori. Each Tori has its own scale and musical tone. Famous Tori include Gyeong Tori (경토리), Menali Tori (메나리토리), and Susimga Tori (수심가토리). For example, Gyeong Tori mainly comprises a five-note system (sol, la, do, re, mi) and gives a bright and light impression to the audience. We collected four types of Tori (Gyeong Tori, Menali Tori, Susimga Tori, Yukjabaeki Tori) with a total of 146 chunks at a 22,050 Hz sampling rate. The length of each chunk varies from 1 to 10 min. We clip each chunk into uniform segments with a length of 10 s, which results in 1428 Yukjabaeki Tori segments, 1073 Menali Tori segments, 985 Gyeong Tori segments, and 131 Susimga Tori segments. We chose Susimga Tori as the unknown class, and the other three types of Tori are split 4:1 for training and testing. The STFT window size is 2048 for each segment, generating log-Mel spectrograms of size 128 × 960. In training, the model takes the spectrogram feature as input and approximates the label (Gyeong, Yukjabaeki, or Menali Tori).
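The segmentation step can be sketched as follows (a hypothetical reading in which trailing remainders shorter than 10 s are dropped):

```python
def segment_counts(durations_sec, seg_len=10, sr=22050):
    """Clip each chunk into non-overlapping 10-s segments at the given
    sampling rate; remainders shorter than `seg_len` are assumed dropped."""
    return [int(d * sr) // (seg_len * sr) for d in durations_sec]

# A 65-s chunk yields 6 full segments; a 9-s chunk yields none.
counts = segment_counts([65, 9, 600])
```

Working in samples rather than seconds makes the same function usable for the 44.1 kHz DCASE recordings by changing `sr`.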
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-023-50639-7.
Acknowledgements
This work was funded by the National Research Foundation of Korea (NRF) under the Development of AI for Analysis and Synthesis of Korean Pansori Project (NRF-2021R1A2C2006895).
Author contributions
J.Y. designed the study, conducted the experiments, and prepared the original manuscript. W.W. prepared the dataset. J.L. supervised the research and edited the manuscript. All authors have read and agreed to the published version of the manuscript.
Data availability
The authors declare that all training and testing data and codes supporting this study are available from the first author upon reasonable request. All other data supporting this study are available within the article. The DCASE2019 Subtask 1C, GTZAN, and MagnaTagATune datasets are available at ( https://dcase.community/challenge2019/task-acoustic-scene-classification , https://www.kaggle.com/datasets/andradaolteanu/gtzan-dataset-music-genre-classification , https://mirg.city.ac.uk/codeapps/the-magnatagatune-dataset ); the processed Tori dataset is available at https://drive.google.com/drive/folders/1QFP2IEtQU3i5be1tvVKLBeSCEbYMr91I?usp=share_link .
Competing interests
The authors declare no competing interests.

Citation: Sci Rep. 2024 Jan 13; 14:1282. License: CC BY.
PMC10787753 | 38218989

Introduction
There is no need to dwell on the global obesity epidemic in 2023 and its many connections with environmental issues. It has been estimated that the economic impact of overweight and obesity will grow from 2.19 to 3.3% of gross domestic product in > 160 countries by 2060 1 . While the development of obesity inherently involves a waste of natural resources, the health consequences of obesity also have severe repercussions, and bariatric surgical procedures, in their current configuration, undoubtedly have their share of responsibility. Indeed, obesity surgery itself generates a significant proportion of global greenhouse gas emissions (GHGe), to which operating rooms (ORs) have been shown to contribute greatly 2 – 4 . The time has now come for bariatric specialists to take an in-depth look at the relationship between these issues and how they should be addressed.
Following the steps described in previous studies 2 – 5 and taking local possibilities into account, we identified several measures that could be implemented in our center and its bariatric component, combining several approaches: (1) original measures pertaining to recycling and maximizing instrument use; (2) improvements in waste management that could have been implemented earlier, as other centers have done; (3) relevant measures that should be implemented in the near future but have not been so far because of more or less temporary local conditions or current health regulations; (4) decreased use of anesthetic gases through new approaches. The methods are listed in Table 1 .
Some measurements have been suggested based on available data, i.e. the DD UK tables 7 . In this regard, it appeared that calculations concerning expensive, disposable instruments that are widely used in bariatrics (energy devices, staplers) were the most meaningful, while others remained elusive.
Difficulties encountered in implementing the study
Sparing GHGe is a dynamic and challenging process that currently requires finesse and local negotiation. Not everything is possible in terms of packaging, sterilization processes, or reprocessing. Recycling surgical instruments was our core initiative (Figs. 1 and 2 ), since it has been pointed out that there is high potential in using fewer surgical tools and/or reusing them 2 , 3 . Yet, recycling turned out to be tedious and not cost-efficient: each instrument required 20–40 min to be dismantled, while only 30% of its two components (plastic and metal) could be recycled. As a market item, the recycled material has a highly volatile value that is poor for the time being, e.g. at best, at the end of 2022, 1 t Inox = 800–150 € for transportation, other metals = 150 €/t, plastic = 150–40 €/t for compacting, not including storage. Moreover, some items contain traces of rare-earth elements that are difficult to recycle. An economic equilibrium is not foreseeable in the near future in this respect, but the initial design of such instruments could be revised to facilitate their dismantling, for example by decreasing the glued parts. Some of our initiatives were indeed in line with current environmental recommendations that had not been fully implemented in our hospital: packaging and other waste management, laundry, etc. Recycling anesthetic gas canisters represents a recent initiative that has encountered compatibility issues with current breathing machines. It also conflicted with another recent tool, the AGSS, which conveys anesthetic gases from the OR to the roof to be released into the outside atmosphere.
Bias
Global GHGe attributed to the health sector ranges from 6 to 10%; these discrepancies arise from the parameters considered in the calculations, with, on the one hand, strictly on-site calculations and, on the other hand, those considering the whole scope of GHGe, e.g. the supply chains, which may account for 59% of these emissions 2 , and the emissions due to staff/patient consumption and transportation, research and development for a given device, etc. Theoretically, only studies using the life cycle assessment (LCA) methodology for all items should be considered for analysis 3 . Careful attention should be paid to the boundaries, because otherwise any study would remain speculative, as is the case when dealing with the scope of product inventory for a given functional unit, e.g. a surgical procedure. Confusion may arise with regulatory measures that are imposed or advised ahead of common practice. A typical example concerns the anesthesia practice and the gases that are commonly used and/or locally excluded. Advocating reusable (e.g. cotton-made) gowns and surgical drapes is relevant, as is reusing/reprocessing instruments, but, depending on each location, it is possible that this measure is encouraged or, on the contrary, that it goes against recommendations regarding infection control. The discussion about energy sources covers a variety of situations and energy mixes: for instance, in the French Rhône-Alpes area, where this study was conducted, > 70% of the electricity comes from nuclear energy, which has a very favorable GHGe profile, whereas parts of Germany still rely on coal-generated electricity. The example of anesthetic gas emission is compelling: the AGSS was introduced to protect the personnel from gas emissions inside the ORs, but it also consumes energy and releases GHGe into the atmosphere. The retrieving canister solves this problem, but at an extra cost that had not been anticipated.
Advances in surgical techniques can also be challenging, the typical example being robotic surgery. In a study conducted in the field of gynecology by Woods et al., the solid waste generated and energy consumed by robotic surgery represented 40.3 kg CO 2 eq./patient vs. 29.2 for laparoscopy (+ 38%) and 22.7 for laparotomy (+ 77%) 10 . Returning to laparotomy is barely imaginable, and the robotic upgrade is far from insignificant as it represents an asset in many centers 11 . In the study by Thiel 5 pertaining to laparoscopic hysterectomies, the difference from baseline in GHGe resulting from reprocessing was 9%. The difference resulting from minimizing the instruments was computed at 46%, most of them being seemingly disposable. In our model, we included the recycling of disposable components (staplers and energy devices), which represent the core of some operations (e.g. > 1500 € in a sleeve gastrectomy), and we took into account the emissions resulting from sterilization (autoclave). Other limitations: we did not take into account the whole scope of the surgical process, from pre-operative measures to post-operative care, including drug prescription; the computer and email activities have not been included; the water footprint has not been modified; an effort has been made regarding the surgical shirts and trousers for the staff that are no longer disposable, but not regarding the surgical drapes and gowns. Likewise, the duration of the procedures was not relevant, because these are standardized interventions with a limited set of variations (Table 4 ), the operator and team being the same.
During the first study period, i.e. the control period, from October 1, 2021 to December 31, 2021, 59 operations were performed, while during the second study period, i.e. the action period, from January 1, 2022 to March 31, 2022, 56 operations were performed (Table 4 ).
There were no complications in these two series of interventions, nor intra-operative adverse events that could be attributed to one or the other; the operative time was not different between the two series across the whole scope of procedures. None of the measures taken represented an actual “waste of time”.
Savings amounted to 12.3 kg CO 2 eq. for 1 laparoscopic sleeve gastrectomy, 5.9 kg CO 2 eq. for 1 band removal, and approximately the equivalent for 1 Roux-en-Y gastric bypass, which amounts to 100 kg CO 2 eq. saved in 3 months and, by extrapolation, 2400 kg in one year (Table 5 ).
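As a rough sketch of this arithmetic, the per-procedure savings can be combined with a case mix and extrapolated linearly to a year. The Roux-en-Y value (stated in the text only as "approximately the equivalent") and the operation counts below are assumptions for illustration, not the study's exact breakdown.

```python
# Per-procedure CO2-eq savings (kg); the Roux-en-Y (rygb) value is an assumed
# approximation, since the paper gives only "approx. the equivalent".
SAVINGS_KG = {"sleeve_gastrectomy": 12.3, "band_removal": 5.9, "rygb": 11.0}

def quarterly_savings(case_mix):
    """Total kg CO2 eq. saved over one quarter for a given case mix."""
    return sum(SAVINGS_KG[op] * n for op, n in case_mix.items())

# Hypothetical case mix for a 56-operation action period (assumed counts).
mix = {"sleeve_gastrectomy": 30, "band_removal": 10, "rygb": 16}
annual = 4 * quarterly_savings(mix)  # simple linear extrapolation to one year
```

With this assumed mix the annual figure lands near the 2400 kg order of magnitude reported above; any other mix simply rescales the total.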
Estimation of our ECO-SCORE
Based on Table 2 , we assessed a global SCORE at C+: waste 3+, recycling 3+, diminishing surgical instruments 4−, anesthetic 2, energy 4, other 1. | Discussion
Obesity, obesity surgery, and environment
Obesity per se is a major contributor to GHGe, in relation with the carbon footprint of food production and associated supply chain. The genesis of obesity thus significantly impacts GHGe worldwide. Regarding metabolic food waste (MFW), which is defined as the amount of food leading to excess body fat, Europe and North America were found to display the highest values for all three MFW footprints (i.e. carbon, water, and land footprints), being 14 times larger than in South Asia and South-East Asia 12 . The modern food environment, i.e. food availability, also strongly contributes to this genesis in any country 13 .
One would assume that a significant part of this waste may be compensated by weight loss, notably achieved by bariatric procedures. While there is no substantial evidence for this, a large body of arguments points to reduced medical costs after surgical weight loss, with therefore a favorable impact on the environment 14 .
Although bariatric procedures marginally contribute to lower resource consumption once patients have achieved sustained weight loss, an argument can be put forward in favor of such interventions, therefore justifying coverage by health insurances: the markedly increased well-being of the obese population 15 . In other words, fighting the stigma that is often associated with obesity is beneficial to this population, since this stigma prevents obese patients from gaining easy access to treatments such as bariatric surgery 16 .
Two current trends may adversely affect this reasoning: (1) the relatively aggressive approach towards Stage I obesity [body mass index (BMI) 30–35], with obesity surgery claiming success in those patients when affected by a comorbidity, typically type 2 diabetes 17 , hence the assertion that metabolic surgery should be strongly promoted and offsets the costs of treating such conditions; (2) the relative extension of robotic surgery in this field also significantly impacts GHGe, as previously shown in gynecological surgery 10 or other types of surgery 11 .
Looking for opportunities and compromises
Other strategies pursuing common goals are currently being considered. On the one hand, “green surgical innovation” 18 suggests that evaluating new surgical devices or new surgical options in general, such as robotics 10 , 11 or digitalized options, could benefit from the strict analysis of their carbon footprint, which thus determines whether a new strategy/device should be implemented or not. This is questionable, since innovations and environment may remain compatible. On the other hand, others rightly point out that surgical issues have boundaries that go beyond the strict perimeter of the OR 19 , or even suggest a much broader move that encompasses several items in order to build sustainable and resilient surgical systems, possibly at the level of a geographical area, e.g. the West Pacific region, including infrastructure, service delivery, finance, information systems, health workforce, and governance 20 . One may object that health systems are closely interconnected throughout the world, for example when it comes to the workforce, and that such definitions may be vague enough to hamper real and coordinated efforts.
As noted by Rizan et al. 3 , the numbers are difficult to interpret because the various studies have different frameworks and use different references, with the authors placing emphasis on the various choices that can be debated, e.g. whether or not including anesthesiology, energy mix, life cycle analysis, etc. Hence, we suggest acting upon what is currently within reach at a given time and evaluating the progress that can be achieved at various levels. We propose to start from a given situation in a hospital and try to improve different scores, thereby contributing to a more global effort including recycling, re-processing, eliminating single-use items whenever possible, energy saving, and minimizing instrument use and anesthetic gas. It is common to feel that others should make environmental efforts before we do, or in other words, that other fields have a more detrimental impact on the environment than our own. In view of the substantial contribution of the Scope 2 (energy) to GHGe as compared to the others, one may claim to be powerless or favor green-washing options and focus on good intentions rather than real actions. Likewise, many choose to blend environmental issues with social issues when presenting results, which is politically relevant (at least regarding the so-called “social and environmental responsibility”), but probably scientifically irrelevant in the medical field.
Yet, it is interesting to look at other fields whose environmental strategy is nowadays being questioned, such as the automobile industry, fashion industry, construction, computer software and internet, etc. How fast are efforts being made and should we follow the same path? What kind of pressure is applied in each case? These are difficult questions, and it may be best that everyone acts on their own behalf, regardless of what others do.
So far, we have overlooked the final aspect of GHGe in patients at the very end of the surgical process, i.e. considering the economical long-term benefits of weight loss, which offset the initial costs in many studies. For instance, the decrease in drug costs has been evaluated: in a meta-analysis performed by Lopes et al. in 2015, the mean reduction in total drug costs was estimated at 49.8% over a follow-up duration of 6–72 months after bariatric surgery 21 . Yet, such studies are lacking for GHGe, and we need benchmarks. One study has been conducted in the field of esophageal reflux surgery, showing that the cost–benefit ratio was not favorable up to 9 years after surgery 22 . Further studies are warranted to assess the benefits of GHGe reduction in the bariatric field.
Lastly, we addressed what could be called the “obesity debt”, i.e. the food waste associated with overweight and obesity, which amounts to 140 M tons/year according to Totti et al. 12 . This does not impede the efforts towards weight loss, on the contrary, but it could be an incentive to favor less energy-consuming methods, e.g. endoscopic solutions rather than typical laparoscopic surgical options 23 . While an ultimatum like the EU ban of combustion-engine cars by 2035 is barely conceivable, it makes sense to promote incentives to develop less impacting technologies such as endoscopic bariatric methods, perhaps associated with new drugs (GLP-1 receptor agonists).
There is no reason why bariatric surgery would be more or less environmentally friendly according to BMI range; yet adding more patients to the surgical workflow seems unfriendly to that same environment, particularly if sound alternative options exist for those patients (drugs, endoscopy, etc.), which is the case for lower BMI patients, and even for selected higher BMI patients and/or those unwilling to undergo surgery. To put it differently: such treatments, which are explicitly non-surgical, entail less GHGe and are therefore more likely to concern this range of BMI (30–35). However, when surgery is performed in these patients, it does not mean more GHGe, but those emissions could have been spared if surgery had not been the primary option. | Conclusion
Is it cost-effective to try and diminish GHGe (and other items) in an operative setting? Does it affect surgical outcomes? Basically, the efforts we can make without further delay are mostly cheap or affordable; more strategic ones (e.g. shifting to less consuming operations) represent a different issue that would require funding/incentives and consensus. The efforts that we tried are arguably quite doable even when facing reluctance; they did not, and should not, impact the medical/surgical outcomes at all. | Obesity is a growing issue worldwide, whose causes and consequences are linked to the environment and which therefore has a high carbon footprint. On the other hand, obesity surgery, along with other procedures in surgical suites, entails environmental consequences and responsibilities. We conducted a prospective comparative study on two groups of bariatric interventions (N = 59 and 56, respectively) during two consecutive periods of time (Oct 2021–March 2022), first without and then with specific measures aimed at reducing greenhouse gas emissions related to bariatric procedures by approximately 18%. These measures included recycling of disposable surgical equipment, minimizing its use, and curbing anesthetic gas emissions. Further and continuous efforts/incentives are warranted, including reframing the surgical strategies. Instead of comparing measurements, which is difficult at the present time, we suggest defining an ECO-SCORE in operating rooms, among other healthcare facilities.
Operating room and environment
Depending on calculations, the healthcare sector is responsible for up to 10% of total GHGe (USA, UK) 2 , 3 , which should raise awareness and call for urgent action 4 . It has been shown that ORs were a major source of GHGe worldwide, representing up to 60% of the emissions of a given hospital 3 ; their carbon footprint has been estimated at approximately 184 kg per intervention, which corresponds to the weekly consumption of a 4-person family in the western world 4 . Depending on the operations, locations, and calculation methods, this footprint varies from 6 to 814 kg 3 . Among others, surgical operations account for 21–30% of hospital waste, and electricity alone represents more than 60% of the total 2 , 3 . There are three different scopes of GHGe in ORs (Scope 1: anesthetic gases; Scope 2: electricity use and heating; Scope 3: surgical supply chain and waste disposal), and it has been recommended by MacNeill et al. to act separately on each of them 2 .
According to a meta-analysis by Rizan et al. 3 , the carbon footprint of surgery can be reduced by improving the energy efficiency of ORs, using reusable or reprocessed surgical devices, and streamlining common procedures. While multiple approaches need to be combined, some limits have been encountered: commonly implemented means, such as recycling surgical waste, can result in a reduction in GHGe of less than 5% 5 .
Acting upon the elements that contribute to this situation proves difficult because of the great heterogeneity of potential measures that can be implemented in various locations 2 . In a study by Thiel, the total carbon footprint generated by a laparoscopic hysterectomy was estimated at 562 kg and could be reduced to 285 kg and even to 125 kg if anesthesia was removed from the equation 5 . Anesthetic gas actually represents a very significant part of these emissions, as illustrated for example in 2020 by Ryan et al., who emphasized the global warming potential of sevoflurane 6 .
Relevant leads have been suggested in order to decrease GHGe 5 : (1) minimizing the materials used in the OR, (2) maximizing instrument reuse and/or single-use instrument reprocessing, (3) moving away from some heat-trapping anesthetic gases, (4) reducing off-hour energy use in the OR. As a major user of those facilities and materials, bariatric surgery is a good example of the potential for savings in this regard.
Hence, we should answer the following questions: Can we clear a path towards less GHGe in a regular operating room, on a larger scale than currently and in a timely manner? Can we elaborate a consensus regarding the best available options, despite the complexity of existing guidelines, various focuses, and regulations? Do current habits take a reasonable path in terms of recycling, sparing resources, and finally looking for less energy-consuming procedures? How do the specificities of bariatric/metabolic procedures impact this reasoning? Ultimately, can we reduce the GHGe in the theatre by more than 10% for the time being?
Building a surgical ECO-SCORE
According to some reports for industry in general, up to 60% of saved emissions may be obtained through inter-industrial cooperation, including 35% from recycling and 5% from energy consumption. These figures are actually difficult to extrapolate to the health sector, which displays great differences from one hospital or group of hospitals to another, as exemplified in MacNeill’s paper 2 .
Notwithstanding, hospitals could share mutual standards in many respects and despite their differences, from anesthetic gas to instrument reusing/reprocessing, with specific goals. Although some hospitals or groups of hospitals issued environmental or “responsibility” claims, it is difficult to assess whether or not a real green strategy has been implemented.
We suggest an ECO-SCORE including several indicators that would demonstrate a favorable trend starting from a given setting and would encompass the current variations among hospitals, areas, countries, etc. It seems reasonable to address several levels of objectives according to the different settings and to include progressive and planned measures in order to reach a valuable ECO-SCORE, starting from various baselines. We propose to draw inspiration from the Open Food project 9 , with grades ranging from A to F. Sequential strategies may be used. Balancing factors are often connected to local circumstances and leave room for evolutionary strategies (Table 2 ).
Several issues should be addressed in order to avoid misunderstandings: (1) Evaluating what can be implemented in a given location at a given time (i.e. ECO-SCORE in a context); examples: particular energy mix, recent initiatives taken at the local level or, for instance, at the level of a group of hospitals, such as waste policy. (2) Addressing the strategic choices that have been or may be included, defined at a regional, national, or multinational level; examples: shift from bariatric surgery to bariatric endoscopy, promotion and/or prohibition of specific procedures. Such choices have an environmental background, but can face controversy at a given time, mostly on scientific grounds: for example, the shift to endoscopy could be implemented if and only if a longer duration of effect can be demonstrated. (3) Pondering the potential arbitrations: robotic surgery, Enhanced Recovery After Surgery (ERAS), national or international guidelines, post-operative protocols, etc. (4) Estimating the factors that are overlooked because they fall into other categories of GHGe. These are typical and explain why figures may not match. A few examples: expenses for transportation, food, etc. for the staff and other employees in hospitals, research, education and training, choice of instruments (disposable or not).
We suggest the items of the ECO-SCORE may be accounted according to Table 3 .
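Since the exact item weights and grade boundaries live in Tables 2 and 3, which readers may not have at hand, the aggregation can be pictured with a hypothetical mapping: each item is scored on a 1–5 scale and the mean is mapped onto the A–F ladder. The cutoffs below are illustrative assumptions, not the published scoring rules.

```python
# Hypothetical ECO-SCORE aggregation: item scores on a 1-5 scale, with the
# mean mapped to a letter grade A-F. Real weights/boundaries are in Tables 2-3.
def eco_score(items):
    mean = sum(items.values()) / len(items)
    for grade, cutoff in (("A", 4.5), ("B", 3.5), ("C", 2.5), ("D", 1.5), ("E", 0.5)):
        if mean >= cutoff:
            return grade
    return "F"

# Item scores echoing the global assessment reported in the Results section.
scores = {"waste": 3, "recycling": 3, "instruments": 4,
          "anesthetic": 2, "energy": 4, "other": 1}
```

Under these assumed cutoffs, the scores above average to about 2.8 and map to a C, consistent with the C+ assessment reported in the Results.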
Statements
All procedures were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants for whom identifying information is included in this article. The study has been pre-approved by the Scientific Committee (Scientific Advisory Board) of the Private Hospitals Units from VIVALTO-France, GCS Merit Vivalto, as a licensing Committee. Figures 1 and 2 have been taken by the author (J Dargent), and therefore need no permission. They may be published under a CC BY open access license; permission is granted to publish them in all format i.e. print and digital. | Acknowledgements
Nisrine Nait, VEOLIA Lyon, France. ONYX Auvergne Rhône Alpes, 5 Rue des Frères Lumière, 69680 Chassieu.
Author contributions
J.D. wrote the manuscript text and reviewed it.
Data availability
Data will be made available on reasonable request; contact: Jerome Dargent, [email protected].
Competing interests
The authors declare no competing interests. | CC BY | Sci Rep. 2024 Jan 13; 14:1252
PMC10787754 | 38218992 | Introduction
Hot water balneotherapy in water rich in minerals and bicarbonate has traditionally been used around the world 1 , and various studies have been performed to investigate the effectiveness of balneotherapy in trauma 2 , skin diseases 3 , and musculoskeletal disorders 4 – 7 . In recent years, studies have reported positive effects of balneotherapy on sleep and mental stress 4 , 8 – 10 .
Mental stress has long been believed to affect biological homeostasis, including emotional regulation and hormone secretion, and recently, it has become clear that it also causes sleep disorders, depression, and cardiovascular disease and increases susceptibility to infectious diseases and cancer 11 – 13 . In Japan, the morbidity rate due to mental stress has been continuously increasing since the period of high economic growth in the late 1950s, with over 60% of employees reported to be experiencing mental anxiety and stress 14 .
The suicide rate in Japan is by far the highest among the Group of Seven industrialized nations, and the leading causes have been reported to include anxiety and depression related to mental stress in the workplace 15 . Despite a modest decrease in the number of suicides after the enactment of the Basic Act on Suicide Prevention by the Japanese Government in 2006, the annual number of suicides remains high at 20,000 16 . In 2015, the Japanese Ministry of Health, Labour and Welfare mandated that workplaces conduct occupational stress checks on their employees 17 , 18 . Nevertheless, the effectiveness of this measure is limited because of the small number of industrial physicians affiliated with workplaces and the lack of methods for successfully coping with mental stress.
The COVID-19 pandemic deeply affected the lives of people globally, not only because of the physical health risks of infection, but also because of the considerable mental stress related to the significant pandemic-related lifestyle changes, which in turn posed a threat to mental health and increased suicide rates 19 . Against this background, the establishment of specific coping strategies to relieve stress is considered to be one of the most critical societal challenges 20 .
Previously, we performed a study on a neutral bicarbonate ionized water (NBIW) bath tablet. In contrast to hot springs, in which the water composition varies depending on area and weather conditions, the quality of our bathing tablet is stable, and people can enjoy balneotherapy at home without having to visit a hot spring. Our previous study showed that bathing in NBIW tends to increase blood bicarbonate ion concentrations and to enhance blood flow by increasing the expression and phosphorylation levels of endothelial nitric oxide synthase and the levels of nitric oxide (NO) in femoral vascular tissue 21 . Moreover, in a preliminary randomized controlled trial conducted on the basis of those experiments, we found that the effects of higher body temperature upon waking and one hour after bathing tended to occur earlier in the intervention group than in the control group. When the Profile of Mood States was used to evaluate the effectiveness of bathing in NBIW on stress improvement, vigor/activity scores were improved 21 . The earlier study did not evaluate effects on immunity, but the positive effects of balneotherapy on immunity have received much attention in recent years, and many studies have evaluated them 6 , 7 , 22 . Therefore, we conducted a randomized, open-label, crossover study to investigate the effect of NBIW on sleep, mental stress, and immune function. In particular, to more accurately evaluate the efficacy of NBIW in improving mental stress, we assessed sleep quality, which correlates with stress, not only subjectively with a questionnaire-based analysis, but also objectively with an activity meter. | Materials and methods
Participant eligibility and recruitment
This randomized clinical trial was conducted and reported in compliance with the Declaration of Helsinki and the CONSORT Statement, respectively. It was approved by the Chiyoda Paramedical Care Clinic Ethics Review Board (IRB No.: 15000088; Approval No.: 22031805), and the protocol was registered with the UMIN-CTR (UMIN000047429) (07/04/2022). All participants provided written informed consent to participate in the study. The study was conducted between April 13, 2022, and August 11, 2022, and no major changes were made to the protocol during the study. Data were collected at Chiyoda Paramedical Care Clinic and other institutions, and statistical analyses were performed by the contract research organization, CPCC, Inc.
Participants were recruited by means of widespread dissemination of information at the study sites. Participation was voluntary. The inclusion criteria were as follows: (a) men and women aged 30 to 60 years at the time of informed consent, (b) individuals who reported experiencing daily stress, (c) individuals who were dissatisfied with their sleep, (d) women who were postmenopausal or had a regular menstrual cycle of 28 to 32 days, (e) individuals who were able to take a bath once a day during the study period, and (f) individuals who were able to receive a full explanation of the study, understand its contents, and give their written consent. The exclusion criteria included 14 items (see Supplementary Information 2 ) in consideration of their impact on the efficacy evaluation and the safety of the participants. The major exclusion criteria included individuals taking medicines or supplements that could potentially have an impact on the study; individuals with irregular lifestyle rhythms, such as shift workers; individuals who may experience loss of consciousness due to seizures; individuals who have experienced coarse oily skin due to bath salts; and individuals who had donated a certain amount of blood before the start of the study. Study investigators and others enrolled study participants who gave written informed consent, satisfied the inclusion criteria, did not meet the exclusion criteria, exhibited no clinical abnormalities based on the results of the screening test, and were deemed eligible for participation in the study, taking into account the results of the BJSQ 23 and PSQI-J 24 . The allocation manager, who was independent of the study and analysis sites, used stratified randomization to allocate eligible participants to 2 groups based on age, sex, and BJSQ and PSQI-J scores at the time of the screening test. 
The parameters used for allocation were analyzed by an unpaired t test or a Wilcoxon rank sum test, and it was confirmed that there was no significant difference between the two groups.
Subsequently, an allocation list was prepared with group names and participant identification numbers.
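The between-group balance check described above can be sketched in a few lines. The function below is an illustrative pure-Python rank-sum test using the normal approximation, and the data are invented; the study's statisticians used their own software for the unpaired t test and Wilcoxon rank sum test.

```python
import math

# Illustrative balance check (not the study's exact software): a two-sided
# Wilcoxon rank-sum / Mann-Whitney test via the normal approximation, which
# is adequate for group sizes like n = 12 and n = 13.
def rank_sum_p(x, y):
    n1, n2 = len(x), len(y)
    # U statistic: pairs where x exceeds y (ties count half)
    u = sum((xi > yi) + 0.5 * (xi == yi) for xi in x for yi in y)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    # two-sided p value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Made-up allocation variable (e.g. age) for the two crossover groups.
ages_group1 = [38, 45, 52, 41, 47, 55, 33, 49, 44, 39, 51, 46]
ages_group2 = [40, 43, 50, 36, 48, 53, 35, 42, 45, 37, 54, 47, 41]
p = rank_sum_p(ages_group1, ages_group2)
```

A p value above 0.05 on each allocation variable is taken as evidence that randomization left the groups comparable.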
Study design and intervention
This study was designed as a randomized, open-label, crossover trial with 2 intervention periods, each lasting 4 weeks, with a 1-week washout period between. The duration of the study product use was set to 4 weeks because another study showed an effect after 4 weeks of study product use, and the washout period was set to 1 week by referring to that study 21 . The study findings at the start of the pre-observation period, 1 week before the start of the intervention, were used as the baseline data.
The study bath salts were NBIW tablets (HOTTAB Inc.) that were pH adjusted to release the maximum amount of bicarbonate ions when dissolved in warm tap water. The composition of the tablets is shown in Table 8 . The tablets were white and weighed 15 g each; once participants had opened the package, they were asked to store the tablets in a cool, dark place with low humidity.
Participants were divided into 2 groups: after the first 1-week pre-observation period, one group bathed without NBIW tablets (control-NBIW group) and the other bathed with NBIW tablets (NBIW-control group). Then, after a 1-week washout period, each group participated in the other intervention. During the NBIW bathing period, participants were instructed to add 4 NBIW tablets to their bath (amount of water: 180 L) once a day and to enter the bath at least 15 min after adding the tablets. During the control period (including the pre-observation and washout periods), no NBIW tablets or other bath products were to be added to the once-daily baths. The bath water temperature was to be kept at 37 °C to 41 °C, and the bath was to be a full-body bath (i.e., shoulder depth) lasting a minimum of 20 min. Participants were permitted to drink room temperature water while bathing but were not permitted to drink cold water or read. The bath water was to be changed daily.
Outcome measures
Primary endpoints
Primary endpoints were mental stress assessed with the BJSQ (simplified version consisting of 23 questions) 23 , subjective sleep quality assessed with the PSQI-J 24 , 32 , and sleep measurements assessed with an activity meter (actigraphy) 57 , 58 . To obtain objective sleep variables, the small, lightweight waist-worn actigraphy device MTN-221 (Acos, Co., Ltd.) was used. Participants wore this device on their waist at all times except when bathing. Sleep parameters were recorded for 3 weeks, from 5 days after the start of the intervention through to 2 days before its completion. Sleep and wakefulness were analyzed with SleepSign ® -Act Ver. 2.0 (Kissei Comtec Co., Ltd, Matsumoto, Japan), which relies on an algorithm that uses the activity and posture data recorded by the actigraphy device in a series of linked calculations 59 . This algorithm was used to evaluate bedtime, total sleep time, sleep latency, wake after sleep onset, sleep efficiency, time out of bed, and bed out latency.
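The SleepSign-Act algorithm itself is proprietary, but the standard actigraphy definitions behind these variables can be sketched; the timestamps below are invented for illustration.

```python
from datetime import datetime

# Standard actigraphy definitions (illustrative; not the SleepSign-Act code):
#   sleep latency    = sleep onset - bedtime
#   total sleep      = (final wake - sleep onset) - wake after sleep onset (WASO)
#   sleep efficiency = total sleep / time in bed * 100
def sleep_summary(bedtime, sleep_onset, final_wake, out_of_bed, waso_min):
    minutes = lambda a, b: (b - a).total_seconds() / 60
    total_sleep = minutes(sleep_onset, final_wake) - waso_min
    return {
        "sleep_latency_min": minutes(bedtime, sleep_onset),
        "total_sleep_min": total_sleep,
        "sleep_efficiency_pct": round(100 * total_sleep / minutes(bedtime, out_of_bed), 1),
    }

# One hypothetical night: in bed 23:00-07:00, asleep 23:20-06:50, 30 min WASO.
night = sleep_summary(datetime(2022, 5, 1, 23, 0), datetime(2022, 5, 1, 23, 20),
                      datetime(2022, 5, 2, 6, 50), datetime(2022, 5, 2, 7, 0), waso_min=30)
```

For this hypothetical night the sketch yields 20 min latency, 420 min total sleep, and 87.5% sleep efficiency.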
Secondary endpoints
POMS2 60 , 61 was evaluated as the secondary endpoint of mental stress. Among immunological tests, lymphocyte subset, neutrophil phagocytosis, and NK cell activity were analyzed by flow cytometry (BD, FACS Caliber). Interleukin (IL)-6, IL-8, and IL-12 in blood were measured by enzyme-linked immunosorbent assay (LBIS Human IL-6 ELISA kit, LBIS Human IL-8 ELISA kit; FUJIFILM Wako Pure Chemical Corporation: Human IL-12 ELISA kit; abcam).
The BJSQ, PSQI-J, and POMS2 were assessed at the time of screening and at the end of each intervention period, and lymphocyte subset analyses were performed at the beginning of the pre-observation period and the end of each intervention period.
In addition, anthropometric measurements (height, weight, body mass index, abdominal circumference), blood pressure and pulse rate, general hematologic tests, eosinophil count, and white blood cell counts were performed. Furthermore, from the day after the start of the pre-observation period until the end of the second intervention period, participants completed a daily log on a dedicated website that included information on whether or not they had used the study product, whether or not they had taken baths, the duration of full-body bathing, the time they went to bed and woke up, their diet, and their intake of medicines and supplements.
Statistical analysis
Microsoft Excel (Microsoft Corporation), IBM® SPSS26 (IBM Japan), BellCurve for Excel (Social Survey Research Information Co. Ltd.), and G*Power 3.1 62 were used for tabulation, graphing, and analysis.
Data were analyzed with appropriate methods depending on normality, distribution, or correspondence. A paired t test was used for parametric data, and Wilcoxon signed-rank test was used for non-parametric data. In the case of parametric methods (paired t test), the effect size was calculated as Cohen’s d, and in the case of non-parametric methods (Wilcoxon signed-rank test), it was calculated as r. | Results
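The two effect sizes named here have simple closed forms: Cohen's d for paired data is the mean of the within-subject differences divided by their standard deviation, and r for the Wilcoxon signed-rank test is |Z|/√N. The sketch below uses invented scores, not study data.

```python
import math
from statistics import mean, stdev

def cohens_d_paired(pre, post):
    """Cohen's d for a paired comparison: mean difference / SD of differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / stdev(diffs)

def wilcoxon_r(z, n):
    """Effect size r for a Wilcoxon signed-rank test with test statistic Z."""
    return abs(z) / math.sqrt(n)

pre = [7, 9, 6, 8, 10, 7]   # e.g. pre-intervention questionnaire scores (invented)
post = [5, 8, 5, 6, 9, 6]   # e.g. post-intervention scores (invented)
d = cohens_d_paired(pre, post)
```

By the usual conventions, |d| ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large; r is read on a similar small/medium/large ladder at 0.1/0.3/0.5.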
Study population
A flow diagram of the study population is shown in Fig. 1 . Of 41 potential participants who understood the content of the study and provided written informed consent, 39 completed the screening test, which comprised a simplified version of the Brief Job Stress Questionnaire (BJSQ) consisting of 23 questions 23 , the Pittsburgh Sleep Quality Index Japanese version (PSQI-J) 24 , age, mean hours of sleep, height, weight, body mass index, blood pressure, and pulse rate. Of those 39 individuals, 25 met the inclusion criteria, did not meet the exclusion criteria, and exhibited no clinical abnormalities based on the results of the screening test, which included the results of the BJSQ and the PSQI-J. This study used a crossover design comprising NBIW bath tablet bathing (NBIW) and standard bathing (control). In one group, control bathing was conducted first (control-NBIW, n = 12), while in the other group, NBIW bathing was conducted first (NBIW-control, n = 13). All 25 individuals participated in and completed the study and were eligible for inclusion in the efficacy analysis. Outcomes of each parameter were measured before allocation and at completion of the intervention. After completion of the whole study period, a statistician independent from the study group performed the statistical analyses.
Background characteristics
Participant background characteristics are shown in Table 1 . Analyses of background characteristics conducted when participants were allocated to groups after the screening test revealed no significant differences in age, mean sleep duration, physical measurements, physiological tests, PSQI-J, or BJSQ between the 2 crossover groups, i.e., control-NBIW and NBIW-control. The percentage of NBIW tablets used (% ± SE) in relation to the number of days of use was not significantly different between the groups (control-NBIW, 99.70% ± 1.03%; NBIW-control, 100.00% ± 0.00%). Bathing time during the intervention was calculated from the diary data recorded by each participant, and results confirmed that there was no significant difference in bathing time between NBIW bathing and control bathing.
Outcomes
Stress
The results of the BJSQ are shown in Fig. 2 . Comparisons of percentage changes from baseline in 3 stressor categories (A, Job Stressors; B, Stress Reaction; and C, Social Support) revealed that the category C score decreased significantly more with NBIW bathing than with control bathing ( p = 0.0193, d = − 0.52 [medium]). There was thus a significant difference between NBIW and control in the BJSQ category C score, one of the primary endpoints. The sample size was confirmed post hoc: the sample size estimated by power analysis with the BJSQ category C score was 25, and the power was 0.81. In addition, we confirmed that no significant carryover effect occurred (Fig. 3 ). Comparisons of the percentage change from baseline for each item in POMS2 are shown in Table 2 . The item Confusion-Bewilderment decreased significantly more with NBIW bathing. A decrease in values, i.e., an improvement in stress, was observed with both interventions in many items of the primary and secondary mental stress tests, but in both tests the percentage changes were larger with the NBIW intervention than with the control intervention.
Sleep
The baseline PSQI-J scores (mean ± SD) were 9.3 ± 1.8 in the control-NBIW group and 9.4 ± 1.4 in the NBIW-control group, i.e., in both groups, the scores indicated poor sleep quality (Table 1 ); however, with the NBIW intervention, scores improved, i.e., decreased (to 5.8 ± 2.4; Supplementary Information 1 ), and the percentage change from baseline was significantly lower in the NBIW intervention than in the control intervention ( p = 0.0238, d = − 0.42 [small]) (Fig. 4 ).
Sleep was also analyzed by actigraphy, which provided data on bedtime, total sleep time, sleep latency, wake after sleep onset, sleep efficiency, time out of bed, and bed out latency. We observed that over 4 weeks, NBIW bathing generally improved sleep compared with control bathing. The results showed increased total sleep time and sleep efficiency and decreased waking after sleep onset, sleep latency, and bed out latency, and paired t tests showed that the decreases in sleep latency and bed out latency were significantly larger with the NBIW intervention than with the control intervention (Table 3 ). A power analysis of the items that showed significant differences between the interventions determined that a sample size of 27 participants would be necessary, assuming an effect size of 0.5, a probability of 0.05, and a power of 0.8. The post hoc calculations revealed larger effect sizes for sleep latency (1.43) and bed out latency (1.59).
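The sample-size and post hoc power figures quoted above (calculated with G*Power) can be approximated with a SciPy-based noncentral-t calculation. A one-sided paired t test is assumed here, since the report does not state the sidedness; under that assumption, d = 0.5, α = 0.05 and power 0.8 give a required n in the high twenties, consistent with the reported figure:

```python
import math
from scipy.stats import nct, t as t_dist

def paired_t_power(d, n, alpha=0.05):
    """Power of a one-sided paired t test for effect size d (Cohen's d)."""
    df = n - 1
    t_crit = t_dist.ppf(1 - alpha, df)   # critical value under H0
    nc = d * math.sqrt(n)                # noncentrality parameter under H1
    return 1 - nct.cdf(t_crit, df, nc)

def required_n(d, alpha=0.05, power=0.8):
    """Smallest n reaching the requested power (simple upward search)."""
    n = 2
    while paired_t_power(d, n, alpha) < power:
        n += 1
    return n
```

With the large post hoc effect sizes reported for sleep latency (1.43) and bed out latency (1.59), the achieved power at n = 25 computed this way is close to 1.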
Immune functions
The results of the lymphocyte subset analyses are shown in Tables 4 and 5 . The proportion of CD4 + cells, i.e., helper T cells, was significantly higher with the NBIW bathing intervention than with the control intervention (Table 4 ), whereas the concentration of CD8 + cells, i.e., cytotoxic T cells, was significantly lower with the NBIW intervention than with the control intervention (Table 5 ). In comparison, in the control intervention, the proportion of CD4 + cells was significantly lower and the proportion of CD8 + cells and number of CD8 + CD28 + cells, which represent the progenitor cells of CD8 + cells, were significantly higher (Tables 4 and 5 ). Although there was no marked change in CD4 + T cell counts in either intervention, CD8 + T cell counts showed a significant increase with the control intervention (Table 5 ). These results indicate that the post-intervention CD4 + to CD8 + T cell abundance ratio (CD4 + :CD8 + ) was higher in the NBIW intervention (although the difference compared with the control intervention was not significant) and showed a significant decrease with the control intervention (Table 5 ). CD4 + CD45RA + cells, i.e., naive T cells, significantly decreased with the NBIW intervention (Table 4 ), resulting in a significant decrease in the naive to memory T cell ratio (N:M ratio) with the NBIW intervention (Table 5 ). At Week 4, the N:M ratio was not significantly different between NBIW and control. However, when compared with the baseline, it was increased by the control intervention and significantly decreased by the NBIW intervention (Table 5 ). The proportion of CD16 + CD56 − cells, i.e., mature natural killer (NK) cells, decreased significantly in the NBIW intervention. At Week 4, the proportion of CD16 + CD56 − cells was not significantly different between NBIW and control. When compared with baseline, it decreased significantly in the NBIW intervention but showed no significant decrease in the control intervention (Table 4 ).
The proportion of CD20 + cells, i.e., B cells, and the number of B cells increased significantly after both the NBIW and control interventions (Tables 4 and 5 ). The proportion of CD3 + cells, i.e., mature T cells, in total lymphocyte counts did not differ significantly between the interventions. Neutrophil phagocytosis activity and NK cell activity increased significantly with both interventions. However, mean neutrophil phagocytosis activity at Week 4 was higher with the NBIW intervention than with the control intervention. As a result, effects on neutrophil phagocytosis activity were greater with the NBIW intervention than with the control intervention (Table 6 ).
Adverse events
A total of 10 adverse events were reported during the course of this study, 7 of which occurred during the control intervention. No significant differences in the occurrence of adverse events were observed between the 2 interventions. In addition, reports of abdominal pain, diarrhea (loose stools), weight loss, and elevated creatine kinase during the use of NBIW were judged by the investigator to be not causally related to the study product. Fisher's exact test did not find a significant difference in the incidence of adverse events between NBIW and control ( p = 1.000, effect size φ = 0.0576). No severe or serious adverse events or adverse drug reactions were observed (Table 7 ).

Discussion
Balneotherapy, i.e., the treatment of diseases by bathing, has been practiced since ancient times for therapeutic and medical purposes, and its therapeutic effects on cardiovascular and dermatological diseases have been documented 25 – 27 . Our previous double-blind, placebo-controlled study of NBIW and our in vitro and in vivo studies suggested that NBIW increases body temperature, promotes blood flow via nitric oxide production, and improves mental stress and sleep quality 21 . However, the placebo control contained magnesium sulfate and sodium sulfate, both of which are known to promote blood circulation; hence, warm bathing in the placebo control also showed mild effects in some analysis items, making it difficult to analyze the effectiveness of NBIW. Therefore, we conducted the present randomized, open-label, crossover trial to further investigate the effects of bathing in NBIW. The study included assessments of immune function because of the close correlation between stress, sleep, and immune function.
A number of previous studies reported on the efficacy of balneotherapy in improving mental stress and sleep disorders, and a large-scale randomized controlled trial (n = 362) conducted in 2016 in Chongqing, China, showed that balneotherapy is effective in reducing mental stress and sleep disorders and alleviating general health concerns 8 . Moreover, given that balneotherapy reduces levels of cortisol, a stress biomarker, some authors have suggested that it may be beneficial in controlling mental stress 28 . In the stress assessment in the present study, BJSQ categories A (Job Stressors) and B (Stress Reaction) were significantly improved after both the NBIW and control interventions. On the other hand, category C (Social Support) was significantly improved by the NBIW intervention but showed no change with the control intervention. For categories A and B, bathing itself thus appears to be effective in reducing stress. Because category C also improved significantly more with NBIW bathing than with standard bathing without any additives, NBIW bathing is considered to be more effective in reducing stress. Moreover, the questions related to BJSQ category C evaluate whether a study participant receives sufficient support from those around them. We suggest that NBIW bathing improved study participants’ mental condition, which resulted in a change in their interpersonal cognition.
Balneotherapy over a period of 2 to 3 weeks may have beneficial effects on sleep quality 29 . It is thought to improve sleep by lowering systemic blood pressure and core body temperature by dilating peripheral blood vessels throughout the body, reducing pain by inhibiting inflammation and pain-related substances, and relaxing muscles 30 . The decrease in blood pressure occurs when the parasympathetic nervous system is dominant. Dominance of the parasympathetic nervous system is also considered to improve sleep quality. In fact, percutaneous stimulation of the parasympathetic nerve was reported to improve sleep quality in retired veterans suffering from post-traumatic stress disorder 31 . The PSQI-J, one of the primary sleep endpoints in this study, assesses subjective sleep quality, including insomnia; the total score ranges from 0 to 21, and a score of 6 or more indicates a potential sleep problem 24 , 32 . In the present study, the PSQI-J score confirmed that the NBIW intervention led to significantly greater improvements in subjective sleep quality than the control intervention. Furthermore, in the actigraphy sleep assessment, which was used as an objective assessment of sleep, the mean values of each item during the 3-week period showed a significant reduction in sleep latency and bed out latency and a trend towards improved sleep in many items with the NBIW intervention compared with the control intervention.
Mental stress and sleep quality are known to interact closely, and studies have shown that sleep quality is reduced in stressful environments 33 , 34 and that stress can be reduced by improving sleep quality 35 . In this study, NBIW bathing was considered to improve the quality of sleep not only subjectively but also objectively. In our previous study, we demonstrated that the bicarbonate ions in NBIW affect endothelial cells, and through phosphorylation of endothelial nitric oxide synthase, promote synthesis of nitric oxide, which dilates blood vessels, leading to improved blood flow and temperature elevation. Moreover, 4-week NBIW bathing improved sleep quality according to PSQI-J score and reduced stress according to POMS2 scores 21 . Together with the results of the present study, these findings indicate that NBIW may improve sleep quality and decrease stress by promoting blood circulation via production of nitric oxide and increase of body temperature.
A large number of studies have shown that mental stress increases the risk of a wide range of diseases and may be a risk factor for cancer and autoimmune diseases 36 , 37 . Mental stress is also particularly strongly associated with cardiovascular disease 38 . For example, several articles have reported that mental stress induces myocardial ischemia in patients with coronary artery disease 39 , 40 and that depression is correlated with cardiovascular disease 41 . In addition, the Japan Collaborative Cohort Study for Evaluation of Cancer Risk (JACC study), which was sponsored by the Ministry of Education, Science, Sports and Culture of Japan, identified mental stress as being associated with increased coronary artery disease and increased stroke mortality in women 42 , and in the large-scale international case–control INTERHEART study, psychosocial factors were found to more than double the risk of myocardial infarction 43 . One of the primary mechanisms underlying the onset of cardiovascular disease due to mental stress may be inflammation-mediated vascular endothelial dysfunction resulting from the release of inflammatory cytokines as a result of suppression of the parasympathetic nervous system 44 . A previous study showed that the bicarbonate ions in NBIW act directly on vascular endothelial cells to induce nitric oxide production through phosphorylation of endothelial nitric oxide synthase 21 . As such, these findings suggest that continued NBIW bathing with warm water may also reduce the risk of cardiovascular disease by improving both mental stress and vascular endothelial function.
Immune function is also thought to be closely related to stress and sleep 45 , 46 . Several studies have shown that stress is a risk factor for cancer and autoimmune diseases, suggesting that stress affects immune tolerance and anticancer immunity 37 , 47 . Furthermore, sleep deprivation is known to alter the secretion of inflammatory markers such as interleukins, tumor necrosis factor-α, other cytokines, chemokines, and acute phase proteins 48 , 49 .
In this study, changes in several immune factors were observed with NBIW and control bathing. The CD4 + :CD8 + T cell ratio did not change significantly with the NBIW bathing intervention, but it decreased with the control bathing intervention because of an increase in CD8 + T cells. CD8 + T cells are known to fluctuate in number and function in response to stress 50 – 52 . These findings suggest that the CD8 + T cell count decreased because NBIW bathing reduced stress.
The proportion of naive T cells, as represented by CD4 + CD45RA + , decreased slightly after NBIW bathing, while the number of memory T cells increased, resulting in a lower N:M ratio. Thus, given that tissue-resident and circulating memory T cells play an essential role in anti-tumor immunity, NBIW bathing may enhance anti-tumor immune function 53 .
The proportion of CD16 + CD56 - cells, a marker of mature NK cells, decreased significantly in the NBIW bathing intervention. CD16 + CD56 - mature NK cells are increased by post-traumatic stress 42 , 54 , suggesting that NBIW bathing may reduce CD16 + CD56 − mature NK cells by alleviating stress.
The proportion of CD20 + B cell markers among total lymphocytes, the number of B cells, and the level of neutrophil phagocytosis and NK cell activity increased significantly after both interventions, suggesting that warm bathing itself improves immune function. However, the mean value of neutrophil phagocytosis activity after 4 weeks was higher with NBIW bathing than with control bathing, and the change was greater with NBIW than with control.
This study has some limitations. First, most of the participants were middle aged. Second, people with self-perceived daily stress were recruited. Third, stress is subjective. These factors may limit the generalizability of the results.
Taken together, the results of this study suggest that NBIW bathing has a positive influence on stress, sleep, and immune function. Stress, sleep, and immune function interact with each other 45 , 46 , so the effect of NBIW on immune function may be mediated through the improvements in stress and sleep. In addition, given that bicarbonate ions stimulate nitric oxide production in macrophage cell lines stimulated by lipopolysaccharide and interferon-gamma, which in turn promotes inflammatory responses 55 , 56 , it is conceivable that increased bicarbonate ions in the blood due to NBIW bathing may have a direct effect on immune system cells. In the future, further studies of such mechanisms are warranted.

Abstract

We previously demonstrated that neutral bicarbonate ionized water (NBIW) bathing enhances blood flow by bicarbonate ions and described the underlying mechanism. However, additional clinical investigation was warranted to investigate the efficacy of NBIW bathing. Hence, we performed a randomized, open-label, crossover trial to examine the effects of NBIW bathing on mental stress, sleep, and immune function. Participants who regularly felt stressed were randomly assigned to NBIW or regular bathing for 4 weeks. Mental stress was assessed with the Brief Job Stress Questionnaire (BJSQ) and the Profile of Mood States Second Edition; sleep quality, with the Pittsburgh Sleep Quality Index Japanese version (PSQI-J) and actigraphy; and immune function, with laboratory tests. PSQI-J scores and actigraphy sleep latency and bed out latency improved significantly more with NBIW bathing than with regular bathing ( p < 0.05). Furthermore, NBIW bathing reduced both stress-induced fluctuations in CD4 + and CD8 + T cell counts and fluctuations in the naive to memory T cell ratio and neutrophil phagocytosis, indicating improved immune function.
These findings suggest that daily NBIW bathing could improve mental stress, sleep quality, and immune function and bring about positive health effects in those who experience stress in their daily lives.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-024-51851-9.
Acknowledgements
The study was funded by Cranescience Co., Ltd., Tokyo, Japan. Editing assistance was provided by Jacquie Klesing, Board-certified Editor in the Life Sciences (ELS), on behalf of Yamada Translation Bureau, Inc., Tokyo, Japan.
Author contributions
I.S. designed the study; T.Y., R.U.N., and S.N. contributed to the collection and analysis of study data; I.S., R.U.N., T.Y., D.O., N.M., H.I., C.N., and S.N. interpreted the data; and I.S., R.U.N., and S.N. wrote the article. All authors approved the final manuscript after critical revision of the manuscript and agree to accept responsibility for its scientific accuracy and consistency.
Data availability
The data used in this study are available from the corresponding author upon reasonable request.
Competing interests
I.S. is a representative director of Cranescience Co., Ltd., Tokyo, Japan, and receives compensation and stock ownership. The other authors declare no competing interests.

License: CC BY. Citation: Sci Rep. 2024 Jan 13; 14:1261.
PMC10787755 | 38218892

Introduction
The need for digitization in the German healthcare system is undisputed. But in a globalized world, where various sectors of the economy interact internationally without problems, the innovation and modification of healthcare systems, specifically in Germany, has proven to be a difficult undertaking 1 .
The hurdles of digitization are ethically perfectly understandable and justifiable: physicians do not work with mere economic data but with sensitive, confidential information. The mishandling and possible privacy breach of secret patient data potentially pose serious real-world consequences for the patient 2 . The protection of the privacy of patient data and confidential information is therefore the overriding maxim in the implementation of digital ecosystems in hospitals - especially when aiming for the goal of establishing paperless hospitals 3 . While innovation in the German healthcare system has picked up speed in recent years, there is room for improvement in a global comparison of the Electronic Medical Records Adoption Model (EMRAM) score 4 . The EMRAM score is a tool that describes the level of digitization in hospitals on a scale of stages from 0 to 7, with stage 0 representing the lowest and stage 7 the highest degree of digitization.
The need for digitized medicine does not only result from logistical, economic and ecological advantages, e.g. reduced storage space, less paper waste and savings in maintenance costs, but also leads to improved and more individual diagnostic and therapeutic concepts through the merger of local, regional and national hospitals into digital medical ecosystems. Furthermore, digitized medicine can enable more precise outcome analysis through improved and coherent longitudinal tracking of patient outcomes and provide feedback for therapeutic decisions. In the context of paper-based records, such data is either not captured at all or is lost over time 5 .
The creation of a digital medical ecosystem can thus establish better therapy algorithms and ensure significantly better treatment options on the premise of sharing medical data and the establishment of Big Data sets 6 . The sheer flood of highly scaled and information-dense data requires innovative technologies that further improve data sharing in medicine as it is almost fully accepted as standard and will develop further in the future 5 .
Given the importance of the topic, it is understandable that digitization, digital ecosystem and cloud computing are buzzwords. In 2008, only 2 articles were published on the above-mentioned topics; since then, the number has grown significantly, with 820 articles on cloud computing published in 2022 alone.
Several review articles address the theory of technical regulations and how to maintain adequate data integrity, confidentiality, anonymity and authenticity, but only a few have given recommendations on how to actually integrate a cloud system into a working clinic 7 – 9 . While in theory a conversion to digital ecosystems is not really an issue, one has to face the reality that a complete conversion is a rather complex process 10 .
Therefore, we would like to present the approach that we have successfully established at the Charité – University Hospital Berlin. Currently, no German hospital runs fully on cloud computing. The approach described in this article characterizes a model approach to the implementation of cloud computing within the framework of a running hospital information system (HIS). As this model approach is characterized by easy accessibility, HL7v2 and FHIR interoperability, and the possibility of using it without total integration into the current HIS, we aim to provide an example for other hospitals.
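As a minimal illustration of the FHIR interoperability mentioned above, clinical data are exchanged as typed JSON resources defined by the FHIR specification. The sketch below builds a minimal FHIR R4 Patient resource; the field names follow the FHIR R4 specification, while the identifier system URL and all values are hypothetical:

```python
import json

# Minimal FHIR R4 Patient resource. Field names follow the FHIR R4
# specification; the identifier system URL and all values are hypothetical.
patient = {
    "resourceType": "Patient",
    "identifier": [{
        "system": "https://example-hospital.de/fhir/patient-id",  # hypothetical
        "value": "12345",
    }],
    "name": [{"family": "Mustermann", "given": ["Max"]}],
    "gender": "male",
    "birthDate": "1970-01-01",
}

# Serialize as it would be sent to a FHIR server endpoint, then parse back.
payload = json.dumps(patient)
restored = json.loads(payload)
```

In a real deployment, `payload` would be POSTed to a FHIR server's `/Patient` endpoint over an authenticated channel; the point here is only that the resource is plain, schema-conformant JSON, which is what makes HIS and cloud components interoperable.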
Cloud computing
Digital ecosystems are generally run on clouds. “Cloud computing” refers to the paradigm of delivering computational resources, including storage, processing power, and applications, as on-demand services over the internet. This model enables users to access and utilize these resources without the need for upfront infrastructure investments, allowing for flexible scalability and cost efficiency. It therefore saves upfront investment in connections and individualized interfaces that cater to a hospital’s needs and are able to adapt to new developments. Cloud computing is characterized by its service models, namely Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), which offer varying levels of control and management for users. This technology has gained significant traction due to its potential to revolutionize IT infrastructures and support various industries, such as healthcare, finance, and entertainment. Notably, cloud computing has been acknowledged for its role in facilitating resource sharing, improving accessibility and enabling collaboration among geographically dispersed users 11 , 12 .
Currently, cloud computing is being used successfully in various areas of medicine: In the provision and processization of telemedicine services 13 , medical image analysis both for oncology services 14 and preoperative planning e.g. for hip arthroplasties 15 , and in the context of citizen health applications to process lifestyle-related data and recommend lifestyle changes 16 , 17 . Cloud computing finds therapeutic use in supporting treatment decisions 18 , early sepsis detection, and computation of complex procedures such as Montecarlo simulations for radiotherapy 19 , 20 . Furthermore, there are already some examples of the implementation of cloud computing as a clinical operating system: In China, for example, large regional hospitals exchange data about patients in a cloud with small grassroot hospitals. The usage of a cloud as SaaS leads to an investment reduction of around 90% while establishing sufficient and modern digital infrastructure 21 .
The predecessors of avant-garde cloud computing are HIS, which are often described as legacy systems or legacy interfaces, referring to computing software that has been outdated by recent technological advances. While legacy interfaces still meet the needs they were designed for (e.g., assessment of personal data, connection between different clinical specialties), they are costly to maintain, use up both computational and physical space and make innovation of their systems difficult. A simple switch to cloud computing could overburden legacy system manufacturers organizationally and financially, so a step-by-step migratory or integrative approach is preferable.
In comparison: cloud computing enables ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services). These resources are accessible with minimal effort or extensive provider interaction 22 .
Currently, no complete cloud HIS software solution has been implemented in Germany. However, 98% of healthcare organizations are already running at least one of their applications in a cloud 23 . The global adoption of the FHIR standard is spearheaded in Germany through projects like the Medical Informatics Initiative, the Berlin Institute of Health (BIH) Health Data Platform or the AIQNET project, laying the foundation for the interoperability of medical data. The consensus among experts - not only from Germany - points out that the healthcare system is more than ready for the start of implementation of cloud-based medical data applications, which have the potential to create a decentralized, interoperable ecosystem for the legitimate use and exchange of medical data 24 . The use and improvement of cloud computing is mainly driven by the advantages of resilience, networking, and strict adherence to data protection 25 . The technical advantages of cloud computing cannot be denied. Apart from various technical challenges, which primarily occupy software developers, the implementation in everyday clinical practice is primarily dependent on the preservation of data protection and medical confidentiality in the processing, transfer, storage and retrieval of sensitive data 26 – 28 .
Legal requirements for cloud computing in Germany and the EU according to GDPR
While healthcare remains a nationally regulated matter, the EU’s limited competences regarding the healthcare system do not extend to data protection, which is regulated at the EU level 29 . EU countries and therefore also German hospitals are subject to the EU’s General Data Protection Regulation (GDPR) 29 . The GDPR regulates the processing of personal data by or on behalf of hospitals. According to the GDPR, the processing and storage of personal data can be a fully, partially or not at all automated process. This includes processes within clinical interfaces and the tasks they are supposed to fulfill 30 .
Based on Art. 28 of the GDPR, the use of an external service provider (e.g., a cloud computing provider) bound by instructions for the processing of personal data is possible under the general premise that data is not forwarded to third parties. From a legal point of view, this describes commissioned processing; the supervising client, meaning the person or company that has transferred the processing of the data to a cloud provider, remains responsible for the data and its security. According to Art. 4(7), 4(8) and 4(10) of the GDPR, the processor is not a third party; rather, the processing is attributed to the controller 30 . The controller is defined as the person, company or other organization responsible for determining how personal data is used. It is of utmost importance to clearly define the roles of every involved party.
Furthermore, the GDPR requires precise specification of the subject matter and duration of the processing, the nature and purpose of the processing, the type of data, the categories of data subjects, and the obligations and rights of the controller. According to Art. 28(3) of the GDPR, processing of personal data may only be carried out on instruction and by authorized persons. Apart from meeting technical and organizational standards, the third-party provider must consult the responsible person regarding technical infrastructure, compliance with obligations and the handling of any data protection breaches (Art. 33, 34 GDPR) 31 . This means that both the clinic and the company supplying the cloud computing SaaS have to contractually define the handling of the subject’s data and the above-mentioned criteria of processing.
Medical confidentiality in Germany is determined by medical professional law (§ 9 MBO-Ä), the treatment contract (§§ 630a-630h BGB) and criminal law. A violation is the unauthorized disclosure of another person’s confidential information (from the personal sphere, or business or trade secrets) entrusted to a physician as a member of a medical profession. Every such violation is considered a criminal offense and is punishable by fines or imprisonment 32 .
Potential collaborations with external service providers that operate cloud computing in third countries are precisely and restrictively regulated by the GDPR. In order for a European or German hospital to be able to cooperate with an external service provider headquartered in the USA, a sufficient level of data protection must be guaranteed in accordance with Art. 44 of the GDPR 33 . Such adequacy decisions have been made for the following countries: Andorra, Argentina, the Faroe Islands, Guernsey, Israel, the Isle of Man, Japan, Jersey, New Zealand, Switzerland, Uruguay and, to a limited extent, Canada. Since 10 July 2023, the European Commission has adopted its adequacy decision for the EU-U.S. Data Privacy Framework. Therefore, data transferred between the two jurisdictions is currently considered as protected as it is in the EU. However, the Austrian data privacy activist Max Schrems, who brought the previous two US-EU regulations to a fall, sees the current data privacy regulations as nearly unchanged from the previous versions. It remains to be seen whether the current regulations will stand up to review by the Court of Justice of the EU, so there is still uncertainty regarding the use of data processors based in the USA.
Legal requirements for cloud computing in Germany according to German federal law, state law and hospital law
Since 2017, according to §203(3) para. 4 of the StGB (German Criminal Code), it has been possible to involve external service providers in assistance activities for the health care professions described under §203 StGB 33 . Since the reform, cloud computing providers have been regarded as persons assisting those bound by professional secrecy. This development should be seen as a signal from the federal government that the move towards cloud computing must also be facilitated at the legislative level. The reduction in the protection of secrets is compensated for by the inclusion of the external service provider in the criminal liability for violations. Furthermore, the Confidentiality Reform Act 34 states that the economic advantage of storing data on an external information technology system (cloud) may be used if data protection is complied with.
The law also stipulates that the client and external service providers must ensure that the latest technical and organizational measures are in place to prevent the leak of personal data. Possible measures include anonymization, pseudonymization or data encryption using a key from the confidentiality provider 35 .
Furthermore, the federal structure of Germany poses additional difficulties for the implementation of a digital medical ecosystem, as the federal states' own hospital laws can be restrictive to varying degrees. For example, in Berlin, the Federal Data Protection Act (BDSG) 36 applies to all hospitals in public or rural ownership. According to § 24(7) S3 37 , 38 , the state hospital law (LKHG) permits access to patient data by the contractor if technical protective measures ensure that no personal reference can be established.
Aligning GDPR, BDSG and LKHG to implement cloud computing
How can the balancing act between necessary digitization and the necessary protection of patient data be resolved? We would like to present the approach taken at a large German university hospital.
Two main methods were implemented to comply not only with German and EU data privacy regulations, but also with international requirements: 1) separation of personal health information (PHI) from medical data and 2) strong encryption of the data, with the storage of and access to the encryption keys restricted to the hospital as data owner or to a trusted third party. Allowing a patient to consent to, or opt out of, the data collection within the platform is a third important aspect. Education and consent of the patients related to the data processing tasks is deferred to the individual hospitals according to their applicable legal constraints. To resolve the complex interplay of the GDPR with national and local data protection rules, consent within the digital ecosystem was characterized as “broad consent” on the basis of Art. 6(1a) and Art. 9(2b) of the GDPR. This grants the possibility of data analysis for various research purposes.
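As a rough illustration of the first two design principles, the sketch below separates PHI from medical data and keeps the pseudonym-to-identity table (and the key it is derived from) inside the hospital as trusted party. This is a hypothetical minimal example, not AIQNET's actual implementation; the field names and the keyed-hash pseudonym scheme are assumptions.

```python
import hmac
import hashlib
import secrets

class PseudonymService:
    """Trusted party inside the hospital: holds the key and the
    pseudonym-to-identity table, neither of which leaves the hospital."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # secret held only by the data owner
        self._table = {}                     # pseudonym -> patient identity

    def pseudonymize(self, patient_id: str) -> str:
        # Keyed hash: without the key, the pseudonym cannot be re-linked.
        pseudonym = hmac.new(self._key, patient_id.encode(),
                             hashlib.sha256).hexdigest()[:16]
        self._table[pseudonym] = patient_id
        return pseudonym

    def reidentify(self, pseudonym: str) -> str:
        return self._table[pseudonym]

def export_record(record: dict, svc: PseudonymService) -> dict:
    """Strip PHI fields and attach a pseudonym before data leave the hospital."""
    phi_fields = {"patient_id", "name", "birth_date", "address"}
    medical = {k: v for k, v in record.items() if k not in phi_fields}
    medical["pseudonym"] = svc.pseudonymize(record["patient_id"])
    return medical

svc = PseudonymService()
cloud_record = export_record(
    {"patient_id": "P-0042", "name": "Jane Doe", "birth_date": "1960-01-01",
     "address": "Berlin", "diagnosis": "M54.5", "prom_score": 17},
    svc,
)
```

Only `cloud_record` would be submitted to the platform; re-identification is possible solely through the hospital-held service.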
Privacy regulations vary not only internationally, but also within the EU and even among the federal states within Germany. Furthermore, the interpretation of privacy regulation fluctuates and is controversially discussed. Pseudonymized data used to be considered equivalent to anonymized data if the connection between data and identity could not be made by any party other than the one possessing the pseudonym-to-identity table 39 . Over the past years, this view has shifted such that only data that cannot be linked to a person by any foreseeable means count as anonymized 40 . Because not all medical data in clinical research, even if de-identified, are considered equivalent to anonymized data, patients typically need to be informed of the purposes of processing the data and of all parties involved. This is very time-consuming and leads to a lower acceptance rate among patients. Research organizations attempt to generalize the purpose and data processing and to exclude any commercial purposes through different types of “broad consent”, in which a patient may agree to data processing for clinical research purposes.
Another approach, followed by the AIQNET consortium, is the clear definition of consented datasets based on legal requirements, such as the collection of data related to the safety and performance of specific groups of medical devices. The collection and analysis of such data is required by the Medical Device Regulation (MDR) 41 , 42 . Thus, the MDR serves as the legal basis for the collection and processing of clinical data related to the safety and performance of medical devices, according to Art. 6(1c) of the GDPR 29 .
Integration of AIQNET at the Charité – University Medicine Berlin
AIQNET, with the Charité as one of its founding members alongside Raylytic GmbH (Leipzig, Saxony, Germany) and BKK B. Braun Aesculap (Melsungen, Hesse, Germany), is a consortium of 16 established organizations from various medically relevant sectors that won the AI competition of the German government in 2019. Work on the infrastructure and the applications based on it has been ongoing since January 2020 and is funded by the Federal Ministry of Economics and Climate Protection (BMWi). Currently, it is possible to become an associated partner of AIQNET after careful validation by the consortium. This federally funded project is the first model approach that implements cloud computing for medical AI applications while establishing means for HIS connectivity and secondary use of medical data for research and compliance purposes as mandated by the MDR.
First and foremost, implementing such an ecosystem in a hospital that still operates a legacy system and must maintain its functionality without any capacity for downtime is no simple undertaking. By analyzing legacy systems and creating connections between them, AIQNET ensures a step-by-step integration. It processes unstructured and structured information from different medical systems and applications. The connection is established via an integration server that masters the protocols HL7v2, Fast Healthcare Interoperability Resources (FHIR) and DICOM (Digital Imaging and Communications in Medicine). This enables the extraction of medically relevant data from legacy systems. As a result, AIQNET supports clinics in automating internal processes.
A migration – or rather integration – of AIQNET involves 1) installation of a virtual machine within the hospital intranet, running the integration server; 2) configuration of the transformations between the connected systems and the integration server; and 3) configuration of the data collection tasks (surveys, follow-up time periods, data validation, etc.).
AIQNET is operated on the UNITY Platform, a granular software-as-a-service module in which various microservices are operated: a DICOM viewer with the option of AI analysis, automated data acquisition, case documentation, and outcome recording with patient-reported outcome and experience measures (PROMs, PREMs). The UNITY Platform is developed by the company Raylytic GmbH (Fig. 1 ) and is the first and only digital solution to automatically collect clinical data and perform AI-powered medical image analysis. The UNITY Platform is already compliant with the GDPR, Good Clinical Practice (GCP) and the Health Insurance Portability and Accountability Act (HIPAA). Furthermore, it is certified according to Information Security Management (ISO) 27001 and ISO 13485. Currently, the UNITY Platform is integrated into the Charité infrastructure for testing purposes. The Spine department of the Charité's Center for Musculoskeletal Surgery uses it to collect both PROM and PREM data. It has also been implemented and prepared for AI analysis of Big Data sets such as pre- and postoperative images of the lumbar spine and whole spine, respectively. The reliability and validity of the AI-analysis software has been proven in prior studies 44 .
The software is characterized by a clearly structured, fine-granular rights management, which grants only authorized and trained persons access to data, with different degrees of access to information regarding patient identity based on the assigned role. To facilitate this functionality, medical data must be analyzed, de-identified, and linked to a patient via a pseudonym prior to submitting data to the platform. The submission itself occurs through encrypted message protocols, and the storage at rest on the cloud system is encrypted.
The above procedure is exemplified by the upload in the AI-enabled DICOM viewer of Raylytic GmbH. Only authorized staff can upload images via the “RayView DICOM Viewer” after sufficient de-identification. Anonymization and removal of metadata are ensured before upload and storage. An image “fingerprint” reduces replicative uploads of data. Each dataset is further subject to internal quality control to ensure fulfillment of the above requirements and suitability for subsequent analysis processes.
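The deduplication step can be illustrated with a content hash over the image payload computed after metadata removal. This is a speculative sketch of how such a fingerprint might work in principle; the article does not specify the viewer's actual algorithm.

```python
import hashlib

uploaded_fingerprints = set()

def fingerprint(pixel_data: bytes) -> str:
    # Hash only the image payload (after identifying metadata has been
    # stripped), so the same image re-submitted under different metadata
    # still produces the same fingerprint.
    return hashlib.sha256(pixel_data).hexdigest()

def accept_upload(pixel_data: bytes) -> bool:
    fp = fingerprint(pixel_data)
    if fp in uploaded_fingerprints:
        return False  # replicative upload rejected
    uploaded_fingerprints.add(fp)
    return True
```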
The storage of the data takes place in accordance with the GDPR, securely on local Charité servers. Strict adherence to minimizing the storage of personal data is ensured. The design of AIQNET (Fig. 2 ) foresees that 1) PHI never leaves the hospital and 2) only the medical data needed are collected from the patients or pulled from the Electronic Health Record (EHR) systems. Data in transit and at rest are always encrypted. Software architecture, implementation, personnel competence and physical access had to be independently assessed, and certification according to ISO 27001 was obtained.
In order to ensure that internal quality controls are up-to-date and free of errors, the UNITY Platform and utilized AI algorithms are undergoing regular internal controls regarding the validity of the algorithms and the resulting data integrity.
In view of the legislative requirements, consideration and fulfillment of all these requirements may seem overwhelming in parts. However, within the framework of a funded pilot project, we would like to prove one efficient possibility of implementation and the resulting benefits.
The consortium is focused on establishing interoperability, structuring data with the help of AI and creating a legally secure framework for data-based patient care. In the future, for example, the performance and safety of medical devices can be demonstrated objectively and largely within the safe framework of AIQNET. By contrast, the current legacy implementations of electronic health record (EHR) systems largely prevent the aggregation of data for purposes such as benchmarking or answering highly relevant questions, such as patient outcome associated with a particular type of treatment, device or pathology, and as such prevent evidence-based precision medicine. Cloud computing based on an open, standardized data model could help hospitals transition from minimally interoperable systems to more specialized, interacting services that advance medicine. However, interoperability should not be limited to the local network, but should enable regional, national and, if necessary, international interoperability by means of a coding system commonly used in the consortium – such as the Logical Observation Identifiers Names and Codes (LOINC) system. With this it is possible to use the UNITY Platform of AIQNET, for example, to pool results of patient-reported outcome measures (PROMs) by connecting several hospitals within the ecosystem. The UNITY Platform can be used to facilitate Big Data studies, regional and national projects, and prospective, multicenter studies. Data exchange will be further enhanced by the integrated translation of medical and clinical data into a universally exchangeable FHIR format 43 , 44 .
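To make the FHIR/LOINC idea concrete, the sketch below builds a minimal FHIR R4 Observation carrying a PROM score, coded against LOINC and referencing the patient only by pseudonym. The LOINC code and all field values are placeholders for illustration, not codes or structures used by AIQNET.

```python
import json

def prom_observation(pseudonym: str, loinc_code: str,
                     display: str, score: int) -> dict:
    """Minimal FHIR R4 Observation for a questionnaire score, coded with LOINC."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [
                {"system": "http://loinc.org",
                 "code": loinc_code, "display": display}
            ]
        },
        "subject": {"reference": f"Patient/{pseudonym}"},  # pseudonym only, no PHI
        "valueInteger": score,
    }

# "12345-6" is a placeholder, not a real LOINC PROM instrument code.
obs = prom_observation("a1b2c3d4", "12345-6", "Example PROM total score", 17)
fhir_json = json.dumps(obs)
```

Because every connected hospital emits the same resource shape with the same coding system, scores can be pooled across sites without custom per-site mappings.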
Cloud computing interfaces that expose functionality via an Application Programming Interface (API) empower third parties to access their data programmatically through self-developed or third-party applications. A third party is defined as a natural or legal person, public authority, agency or body other than the data subject, controller, processor and persons who, under the direct authority of the controller or processor, are authorized to process personal data. Examples are data analytics or process automation applications. The AIQNET consortium has streamlined the development of such applications by standardizing on FHIR as the data model and “SMART on FHIR” as the healthcare IT-systems API definition.
Access of third parties to the data is currently possible via exports of raw data in neutral formats, such as CSV. The exports can be scheduled and transferred via sFTP. In the future, we plan to allow access to the FHIR data through the Smart on FHIR API.
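As an illustration of the raw-data export path, the sketch below serializes pseudonymized records to a neutral CSV format; the sFTP transfer itself is omitted, and the column layout is an assumption, not AIQNET's actual export schema.

```python
import csv
import io

def export_observations(rows: list) -> str:
    """Serialize pseudonymized records to a neutral CSV export.
    Column names here are illustrative only."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["pseudonym", "loinc_code", "value"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

csv_text = export_observations([
    {"pseudonym": "a1b2c3d4", "loinc_code": "12345-6", "value": 17},
    {"pseudonym": "e5f6a7b8", "loinc_code": "12345-6", "value": 22},
])
```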
Based on this, AIQNET members are able to develop applications to perform administrative tasks in health care, e.g., to follow up with a patient, to submit data to a medical registry, to automate the generation of case documentation, or to provide analysis algorithms that aid treatment decision making. Due to the close cooperation between industry, research and healthcare institutions and the consequent access to technical and scientific data, the ecosystem's partners will increasingly benefit from the growing number of applications and the medical insight provided through a legally and technically secure, validated framework.
The management of the cloud infrastructure not only ensures a high level of security against the continuously evolving cyber-security risks, but also means that the in-house IT staff can concentrate on user support and infrastructure needs over solving specialist issues related to the maintenance and compliance of legacy systems. Ultimately, by basing the AIQNET ecosystem on open standards and a data model that is receiving a high adoption rate by innovative EHR providers and fairly new players in the healthcare IT space, such as Apple, Amazon Web Services (AWS), Google and Microsoft, the AIQNET participants benefit from long-term investment security of their own development efforts and a growing selection of software applications and human talent.
The close cross-hospital collaboration between pharmaceutical and medical device companies improves the control and monitoring of new products: AIQNET creates an ecosystem for the broad use of health data for research and evidence-based medicine, while complying with legal requirements (compliance). Pharmaceutical and medical device companies also benefit from AIQNET, as they are required by regulatory requirements such as the Medical Device Regulation of the EU (MDR) to continuously monitor their products as part of post-market surveillance (PMS). With AIQNET, hospital data of routine care is generated in a data protection-compliant manner for the testing of the safety and performance of medical devices by notified bodies.
The step towards integration of cloud computing reflects the need for cloud computing to keep pace with the rapid development of medicine, the dramatic increase in medical data storage, and the need for regional, national and international interoperability. Transparent accounts of the systematic integration of cloud computing in well-running hospitals are rare, and this model approach can therefore be used as a guide. Careful consideration must be applied to the privacy regulations of the different member states of the EU, making every case an individual one.
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Abstract

With the advent of artificial intelligence and Big Data projects, the transition from analog medicine to modern solutions such as cloud computing has become unavoidable. Even though this need is now common knowledge, the process is not always easy to start. Legislative changes, for example at the level of the European Union, are helping the respective healthcare systems to take the necessary steps. This article provides an overview of how a German university hospital is dealing with European data protection law in integrating cloud computing into everyday clinical practice. By describing our model approach, we aim to identify opportunities and possible pitfalls in order to sustainably influence digitization in Germany.
Supplementary information
The online version contains supplementary material available at 10.1038/s41746-024-01000-3.
Author contributions
Conceptualization, M.P.; N.T.; methodology M.P., T.K., S.T., M.D.; F.T., N.T.; resources, M.P.; writing—original draft M.P., N.T.; writing—review and editing, T.K., S.T., M.D., F.T.; supervision, M.P., N.T., project administration, M.P. All authors have read and agreed to the published version of the manuscript.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Data availability
All data relevant to the study are included in the article or uploaded as supplementary information. Data are available on reasonable request.
Competing interests
The authors declare no competing interests.

License: CC BY. Citation: NPJ Digit Med. 2024 Jan 13; 7:12.
PMC10787756 (PMID: 38218993)

Introduction
Groundwater is a crucial source of drinking water, and its availability is essential for economic growth in urban and rural areas worldwide 1 , 2 . Groundwater is less vulnerable to contamination and pollution than surface water and is widely used for domestic purposes 3 , 4 . Due to its large share of available freshwater, reduced sensitivity to pollution, and large storage capacity, groundwater is more important than surface water at a socioeconomic level worldwide. Groundwater undergoes a natural filtration process that removes bacteria and odors, making it suitable for drinking 5 . Groundwater has many advantages, including meeting water supply needs for industrial, agricultural, and other sectors. In many parts of the world, groundwater is the primary source of fresh water, with 50% of potable water demand being met by groundwater, 40% of which is used for industry, and the remaining portion used for irrigation 6 . As the world's population grows, its dependence on groundwater also increases, with 33% of people depending on it to meet their daily needs 7 , 8 . Unfortunately, more groundwater is being consumed than replenished or recharged, which places stress on the availability of this precious natural resource. As a result, groundwater overuse has led to declining water tables, declining water quality, and ongoing, frequent land subsidence 9 .
Despite all the advantages of groundwater, many nations, especially developing countries like India and Bangladesh, are rapidly experiencing a crisis of diminishing groundwater quality due to misuse and contamination 10 . Groundwater contamination can result from both natural and human causes, and both the quantity and quality of groundwater are highly vulnerable today. Anthropogenic activities have accelerated the rate at which the quality of groundwater is declining. Unplanned land-use activities owing to industrialization and subsequent urbanization have led to rising groundwater contamination in recent decades 11 . Urbanization increases impervious surfaces, worsens ephemeral runoff, increases flood risk, and reduces subsequent groundwater recharge. In addition, saltwater intrusion exacerbates the situation in coastal areas, severely threatening urban water supplies and lowering the living standards of residential households 12 . Human activities such as irrigation, together with climate change, are currently impacting groundwater quality and increasing its sensitivity to contamination on a broader scale 13 . Chemical fertilizers are exacerbating the critical issue of nitrate poisoning of aquifers 14 .
One of the largest natural groundwater catastrophes for humanity has been arsenic (As) pollution of groundwater. A study revealed that in the late twentieth century only five Asian nations, namely “Taiwan, China, India, Bangladesh, and Thailand”, were acknowledged as having groundwater contaminated with As 15 . At least 100,000 individuals in these affected nations are exposed to As poisoning through their drinking water. In India, several places in the floodplains of the Brahmaputra and Ganges Rivers have been affected by groundwater contamination with arsenic at levels higher than the permitted limit of 10 μg/L 16 . Currently, Bangladesh and India have the highest number of As-contaminated areas and associated health issues 17 . Specifically, the Ganges delta of the Indo-Bangladesh region is highly affected by groundwater contamination due to As 14 , 18 , 19 . These floodplains are made of recent alluvial aquifers from the Holocene period that originated in the Himalayan region 20 . Consequently, people in these afflicted areas have been regularly exposed to drinking water from hand tube wells contaminated with arsenic. In Bangladesh, using contaminated surface water resources, such as ponds, rivers, and shallow dug wells, has led to water-borne illnesses like cholera, diarrhea, and dysentery 21 . Recent estimates suggest that 5 million people in West Bengal's North 24 Parganas district consume water with an arsenic concentration of more than 50 μg/l, and approximately 50,000 people in West Bengal have developed skin lesions as a result. Both the affected population and the geographic extent of the problem are growing alarmingly each year 22 . Moreover, As has also entered the food chain through rice (paddy) production in the Indo-Gangetic plains via irrigation water carrying As 23 .
From a methodological viewpoint, statistical, machine learning (ML), and artificial intelligence approaches have been utilized to evaluate groundwater vulnerability globally. ML algorithms offer several advantages over statistical methods, as they can efficiently analyze large datasets 24 , 25 . Additionally, geospatial approaches provide quick, efficient spatial, temporal, and spectral analysis of data over a wide area 26 . As a result, numerous researchers have combined geospatial technology with ML algorithms, such as the deep learning network used by Elzain et al. 27 in South Korea, the BRT model used in 28 in Iran, the RF model used by Pal et al. 14 in coastal areas of West Bengal, and the Bayesian model averaging (BMA) used by Gharekhani et al. 29 in West Azerbaijan, Iran, to assess groundwater vulnerability.
Considering the ongoing phenomena related to groundwater resources worldwide, particularly in the Ganges delta, very few studies have coupled hydrochemical factors with ML algorithms 30 – 32 . The literature on groundwater vulnerability highlights that the Ganga–Brahmaputra delta in the Indo-Bangladesh region stands out as a significant area globally affected by arsenic contamination 17 , 33 . The widespread use of tube wells for water supply in the Ganges delta is a critical concern, leading to severe arsenic poisoning. In addition to arsenic, some regions in India also face challenges with elevated fluoride levels in groundwater. The Ganges delta, marked by high population density and robust agricultural activity, requires sustainable water resource management to ensure optimal utilization. Hence, our study focuses on assessing groundwater vulnerability in the Ganges delta, emphasizing the urgency of effective water resource management in this crucial region. Researching groundwater vulnerability in this area is therefore crucial for managing groundwater effectively and making it safe for consumption. In this regard, the present study uses geospatial techniques and ML algorithms, including the LR, RF, and ANN models, to evaluate groundwater vulnerability in the study region. The distinctive aspect of this research resides in integrating statistical, ML, and neural network algorithms with hydrochemical factors. This fusion aims to capture the fluctuations in modeling outcomes and their corresponding spatial distribution across such a vast region. Furthermore, the quality of irrigation water in the study region has been assessed using the USSL and Wilcox diagrams. This study makes a distinctive contribution through its novel outcomes and insights, which add valuable perspectives to the existing literature.
Furthermore, the research provides insight into regional disparities, illuminating differences or distinctions within a specific geographical area. The outcomes of this study will be helpful to environmentalists and policy-makers in planning for the safe consumption of water resources by the local population.
Methodology
In this study, the following methodological steps were followed to fulfill the research objectives. In the initial stage, 352 water samples were collected from existing tube wells in the field to assess different hydro-geochemical properties. Additionally, 352 non-sample points were created for modelling purposes. Furthermore, to assess how well machine learning models perform, it is necessary to divide the dataset into training and testing subsets: the training dataset is used to fit the model, and the test dataset is used to validate it. The entire dataset was split into two categories in a “70:30 ratio for training and validation” of the respective models. A total of fifteen hydro-geochemical parameters were identified for modelling groundwater vulnerability. These parameters are “Depth (m), pH, EC (μS/cm), Salinity (ppt), Ca 2+ (mg/l), Mg 2+ (mg/l), Na + (mg/l), K + (mg/l), Cl − (mg/l), HCO 3 − (mg/l), NO 3 − (mg/l), SO 4 2− (mg/l), PO 4 2− (mg/l), F − (mg/l), As (μg/l)”. Statistical analyses, including “Pearson's correlation coefficient, principal component analysis (PCA) and a multicollinearity (MC)” test, were conducted to understand the nature of the data. Statistical, ML and neural network algorithms, i.e., “logistic regression (LR), random forest (RF) and artificial neural network (ANN)”, were used for groundwater vulnerability assessment. Statistical evaluation metrics, such as “sensitivity, specificity, AUC-ROC, F score, Kappa coefficient, and graphical measures such as the Taylor diagram”, were used to optimize the assessment of the modelling output. The “USSL and Wilcox diagrams” were used to assess groundwater quality.
The following sub-sections describe in detail the methods used in this study.
Sampling and inventory dataset
Field-based water sample collection was the primary task in preparing the hydro-chemical parameters for assessing groundwater vulnerability. In this regard, a “random stratified” sampling method was used to collect water samples across the study region. A total of 352 water samples were collected to prepare the inventory dataset (Fig. 1 ). Standard procedures were followed during the collection of water samples. Sampling was done after running the wells for 5 min, as this removes stagnant water from bore wells as well as hand pumps. Each sampled tube well was kept pumping until the pH and EC reached stable values. Two separate dry, clean sample kits, each with its own collection methods and safety measures, were used to store the collected water samples. To transport each water sample from the field to the lab and keep it at 4 °C, we stored it in a water sample kit during sample collection. Measurements on the groundwater samples were made both on-site and off-site. The analyzed samples were split into two categories on the ArcGIS 10.4.1 platform in a 70:30 ratio: one subset was used for training (70%) and the other for validation (30%).
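The partitioning step (performed in ArcGIS in the study) can be sketched in plain Python as a seeded shuffle-and-split; the seed and implementation details below are illustrative assumptions, not the study's procedure.

```python
import random

def split_70_30(samples, train_ratio=0.7, seed=42):
    """Shuffle the inventory points and split them into training and
    validation subsets (70:30, as in the study)."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# 352 sample points + 352 non-sample points, as in the inventory dataset
points = list(range(704))
train, test = split_70_30(points)
```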
In our current research, we opted for dry season (March–early June) data to model and map groundwater vulnerability in this susceptible region, excluding wet season data. Existing literature indicates a prevalent use of dry season data in studies related to arsenic-induced vulnerability studies 40 , as it is deemed more suitable for assessing vulnerability to arsenic-related risks. In the wet season, groundwater contamination occurs through the percolation and infiltration of surface water, facilitated by ample rainfall. This leads to the transfer of various particles, metals, and ions from surface water bodies to groundwater, resulting in temporary water contamination, which is not ideal for assessing water-related health hazards. In contrast, during the dry season, water levels remain normal, and there is no risk of water contamination through surface metals or other substances. Therefore, based on these considerations, we have exclusively utilized dry season data in our study.
MC test
To ensure the accuracy of the model's output, it is crucial to select appropriate parameters for any vulnerability assessment. To achieve this, MC analysis is one of the most important techniques, since a strong linear relationship between two or more input variables can distort the model estimates. “Tolerance (TOL) and Variance Inflation Factor (VIF)” are two statistical measures often used to test multicollinearity among distinct factors. The predictor variables exhibit a high degree of multicollinearity when the “TOL value is < 0.10 and the VIF value is > 5”. If the MC result exceeds these limits, the highly correlated factors are not suitable for modelling purposes and should be removed from the dataset; otherwise, the output result will not be optimal. The equations for TOL and VIF are:

TOL_j = 1 − R_j^2

VIF_j = 1 / TOL_j = 1 / (1 − R_j^2)

where R_j^2 is the R-squared value obtained by regressing the j-th predictor on all other predictor variables.
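For two predictors, R_j^2 reduces to the squared Pearson correlation, so TOL and VIF can be computed directly, as in this small sketch (the EC/salinity values are made-up illustrative numbers, not the study's data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def tol_vif_two_predictors(x1, x2):
    """Two-predictor special case: R_j^2 = r^2, TOL = 1 - r^2, VIF = 1 / TOL."""
    r2 = pearson_r(x1, x2) ** 2
    tol = 1.0 - r2
    return tol, 1.0 / tol

# EC and salinity are typically strongly correlated (illustrative values)
ec = [300, 450, 520, 610, 700, 820]
salinity = [0.15, 0.22, 0.25, 0.31, 0.36, 0.41]
tol, vif = tol_vif_two_predictors(ec, salinity)
collinear = tol < 0.10 or vif > 5  # flagged for removal by the MC test
```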
Adopted methods for groundwater vulnerability modelling
LR
One can create a multivariate regression relationship between a dependent variable and several independent factors using LR. LR is a multivariate analysis model that can be used to forecast the presence or absence of a characteristic or outcome based on the values of several predictor variables. Many studies have used LR as a standard or conventional benchmark to verify the effectiveness of a new algorithm in vulnerability studies. The benefit of LR is that, unlike traditional linear regression, where the variables must all have normal distributions, it can use any combination of continuous and discrete variables together with appropriate link functions 41 . The challenge in conducting vulnerability analysis with an LR model is choosing the appropriate sample size for the dependent and independent variables 42 . The factors in multi-regression analysis must be numerical, and the variables in discriminant analysis, a related statistical model, should have a normal distribution. After converting the dependent variable into a logit variable, the LR procedure uses maximum likelihood estimation 43 . This is how LR calculates the likelihood of a specific event occurring 44 . The fundamental idea behind LR is investigating a problem in which an outcome assessed with a dichotomous variable, i.e., true or false (1 and 0), is determined based on one or more independent factors 45 . The LR can be expressed by the following equation:
f(z) = 1 / (1 + e^(−z)), where z = b0 + b1x1 + b2x2 + … + bnxn indicates a linear combination of a constant and the products of the independent variables and their corresponding coefficients. The value of z varies from −∞ to ∞, and consequently f(z) ranges from 0 to 1, where b0 indicates the constant, b1, b2, …, bn represent the coefficients, and x1, x2, …, xn are the independent variables.
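A direct transcription of the logistic formulation (with made-up, unfitted coefficients purely for illustration):

```python
from math import exp

def logistic(z: float) -> float:
    """f(z) = 1 / (1 + e^(-z)), mapping z in (-inf, inf) to (0, 1)."""
    return 1.0 / (1.0 + exp(-z))

def predict_vulnerability(coeffs, intercept, features) -> float:
    """P(vulnerable = 1) for one sample; the coefficients here are
    arbitrary illustrative values, not fitted values from the study."""
    z = intercept + sum(b * x for b, x in zip(coeffs, features))
    return logistic(z)

# e.g. two standardized predictors (say, As and EC) with assumed coefficients
p = predict_vulnerability(coeffs=[1.2, 0.4], intercept=-0.5, features=[2.0, 1.0])
```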
RF
The RF model is a reliable AI method for classifying various natural hazards, including groundwater vulnerability. Breiman 46 proposed a potent ensemble-learning method called random forest, which is one of the most widely used classifier ensemble techniques for feature selection, regression, and classification applications. RF is a tree-based ensemble learning technique that builds several decision trees while constructing models. Each tree structure in the ensemble model uses the original input data to train a bootstrapped sample 47 . Decision trees use a collection of binary rules to select a target variable. The data used to train the model comprises the target variable being predicted and a set of predictor variables. Using the predictor variables, the decision tree divides the data into homogenous datasets based on the target variable. The programme then assesses each predictor variable's ability to categorize the predicted value into the two groups. The splitting process continues until there are no more splits to be made 48 . RF prediction is viewed as the unweighted majority of class votes when solving classification issues. The bagging approach is used to select random samples of variables as part of the training dataset for model calibration 49 . The algorithm for RF is expressed as follows: where represents flood occurrence conditioning factors; 1, 2,...n are input vector x .
In an RF, the generalization error can be defined as follows:

$$PE^{*} = P_{x,y}\big(mg(x, y) < 0\big)$$

where x and y indicate the input vector and its correct class, respectively, and mg represents the margin function. The margin function can be described as follows:

$$mg(x, y) = \mathrm{av}_k\, I\big(h_k(x) = y\big) - \max_{j \neq y}\, \mathrm{av}_k\, I\big(h_k(x) = j\big)$$

where I(·) is the indicator function, h_k is the k-th tree classifier, and av_k denotes the average over the trees in the ensemble.
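A toy Python sketch of the margin function and the estimated generalization error for a hypothetical set of tree votes (illustration only; the study's fitted forest is not reproduced here). A negative margin means the ensemble misclassifies the sample, and PE* is estimated as the share of samples with a negative margin:

```python
from collections import Counter

def margin(votes, true_class):
    """mg(x, y): average vote for the true class minus the largest
    average vote for any other class; negative => misclassified."""
    counts = Counter(votes)
    n = len(votes)
    p_true = counts.get(true_class, 0) / n
    p_other = max((c / n for cls, c in counts.items() if cls != true_class),
                  default=0.0)
    return p_true - p_other

def generalization_error(samples):
    """PE* estimated as the share of samples with a negative margin.
    `samples` is a list of (votes, true_class) pairs."""
    return sum(1 for votes, y in samples if margin(votes, y) < 0) / len(samples)

# Hypothetical votes from a 5-tree ensemble on two labelled samples.
data = [(["high", "high", "low", "high", "low"], "high"),  # margin +0.2
        (["low", "low", "high", "low", "low"], "high")]    # margin -0.6
print(generalization_error(data))  # → 0.5
```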
ANN
The ANN is a computational method that can obtain, display, and compute mapping from one multivariate data space to another. The objective of the ANN model is to provide a technique for forecasting results from inputs that have not been used in the modelling process 50 . An artificial neural network is trained using a series of examples of related input and output values. The goal of an artificial neural network is to create a model of the data-generation process in order to generalize and predict outcomes from inputs that it has never seen before. Back-propagation learning is the neural network approach that is most often utilized in the ANN model 51 . This neural network learning technique has three kinds of layers: an input layer, hidden layers, and an output layer. The network is trained using the back-propagation technique until a predetermined minimal error between the network's desired and actual output values is reached. When training is complete, the network is utilized as a feed-forward structure to provide a classification for the entire database 52 . The ANN assigns each input element a specific weight, multiplies the results, adds them up, and then uses a nonlinear transfer function to construct the outcomes. The back-propagation of the ANN model is expressed by the following equations:
The net input of the j-th neuron of layer l at iteration t is

$$net_j^{\,l}(t) = \sum_i w_{ij}(t)\, o_i^{\,l-1}(t)$$

The delta factor for neuron j in the output layer is

$$\delta_j = \big(d_j - o_j\big)\, f'\big(net_j\big)$$

and the delta factor for neuron j in a hidden layer is

$$\delta_j = f'\big(net_j\big) \sum_k \delta_k\, w_{jk}$$

The weights are then updated as

$$\Delta w_{ij}(t + 1) = \eta\, \delta_j\, o_i + \alpha\, \Delta w_{ij}(t)$$

where α is the momentum rate and η is the learning rate within this model.
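A minimal Python sketch of one back-propagation step for a tiny 2-1-1 network with a sigmoid transfer function. The weights, learning rate η and momentum rate α are hypothetical; the study's actual network configuration is not specified at this level of detail:

```python
import math

def sigmoid(z: float) -> float:
    """Sigmoid transfer function."""
    return 1.0 / (1.0 + math.exp(-z))

def train_step(x, target, w_hidden, w_out, prev_dw, eta=0.5, alpha=0.9):
    """One back-propagation step for a tiny 2-1-1 network:
    two inputs -> one hidden neuron -> one output neuron.
    eta is the learning rate, alpha the momentum rate."""
    # Forward pass
    net_h = sum(w * xi for w, xi in zip(w_hidden, x))
    h = sigmoid(net_h)
    net_o = w_out * h
    o = sigmoid(net_o)
    # Delta for the output neuron: (d - o) * f'(net)
    delta_o = (target - o) * o * (1.0 - o)
    # Delta for the hidden neuron: f'(net) * sum_k(delta_k * w_jk)
    delta_h = h * (1.0 - h) * delta_o * w_out
    # Weight updates with momentum: dw = eta * delta * input + alpha * prev_dw
    dw_out = eta * delta_o * h + alpha * prev_dw["out"]
    new_w_out = w_out + dw_out
    new_w_hidden = [w + eta * delta_h * xi + alpha * pdw
                    for w, xi, pdw in zip(w_hidden, x, prev_dw["hidden"])]
    return new_w_hidden, new_w_out, o

# Hypothetical starting weights and no previous update (zero momentum terms).
w_h, w_o = [0.2, -0.1], 0.3
prev = {"out": 0.0, "hidden": [0.0, 0.0]}
new_w_h, new_w_o, out = train_step([1.0, 0.5], 1.0, w_h, w_o, prev)
print(0.0 < out < 1.0, new_w_o > w_o)  # → True True
```

Repeating such steps over the training samples until the error falls below a preset minimum is the training loop described in the text.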
Selected evaluation measures
Evaluating a model's performance, which establishes whether it is relevant or not, is one of the key goals of model comparison. In the geoscientific discipline, assessment metrics for applied models are crucial to estimating their best-case performance in making predictions, especially for modelling approaches based on machine learning. Hence, several evaluation measures have been used by many researchers in different fields of study to optimally assess the modelling output 14 , 24 , 53 , 54 . After a rigorous literature survey, seven prevalent evaluation metrics, i.e., “sensitivity, specificity, PPV, NPV, ROC-AUC, Kappa-coefficient and F-score”, were selected for this study. Alongside these, the Taylor diagram is also applied in this study; it is a graphical representation of evaluation measures expressing the relationship between them. A useful tool for displaying and assessing classifiers is the “receiver operating characteristics (ROC) curve”; the AUC-ROC curve is the common name for a performance indicator for classification problems at different threshold levels. The ROC curve, which is a graph based on the true positive rate (sensitivity) and the false positive rate (1 − specificity), may be thought of as a statistic that measures how well the model performed overall 55 . The AUC-ROC value ranges from 0 to 1, indicating poor to good performance accordingly 56 . The following formulas were used to create the performance evaluation criteria for this study:
$$\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP}$$

$$PPV = \frac{TP}{TP + FP}, \qquad NPV = \frac{TN}{TN + FN}, \qquad F\text{-score} = \frac{2\,TP}{2\,TP + FP + FN}$$

$$k = \frac{P_o - P_e}{1 - P_e}$$

Here, “TP is true positive, TN is true negative, FN is false negative, FP is false positive, and the kappa coefficient is represented by k, the observed agreement by P_o and the chance-expected agreement by P_e”.

Result
Statistical measures of selected hydrochemical parameters
In this study, three statistical tests were conducted on the selected hydrochemical dataset: MC, correlation coefficient, and PCA. The MC test (Table 1 ) showed that all factors were within the threshold value of MC, and therefore suitable for modelling purposes. The depth factor had the highest TOL and lowest VIF (0.66 and 1.515, respectively), while the Ca 2+ factor had the lowest TOL and highest VIF (0.38 and 2.632, respectively). Pearson’s correlation coefficient was used to understand the nature of the substantial association between physical and chemical properties. The correlation coefficient (r) ranges from − 1 to + 1, with values below 0.5, between 0.5 and 0.8, and above 0.8 indicating weak, moderate, and strong correlation, respectively. The highest correlation values were found between pH and K + (0.952) and EC and Cl − (0.973), while moderate relationships were found between pH and salinity (0.546), pH and Mg 2+ (0.506), EC and Na + (0.644), Mg 2+ and K + (0.593), and Na + and Cl − (0.613), and the lowest values were found between Ca 2+ and Mg 2+ (0.422), EC and Ca 2+ (0.365), depth and HCO 3 − (0.359), etc. Details about the correlation coefficient map and table are presented in Fig. 2 and Table 2 . PCA showed that PC 1 explained 43.21% of the variance, followed by PC 2 and PC 3 with 31.02% and 17.08%, respectively. In PC 1, the dominant factors were EC (0.933), salinity (0.927), Mg 2+ (0.874) and Cl − (0.924); in PC 2 the important factors were F − (0.765), As (0.599) and HCO 3 − (0.582); and in PC 3 the dominant factors were PO 4 2− (0.620), NO 3 − (0.582) and K + (0.339). The biplot map of PC 1, PC 2 and PC 3 is presented in Fig. 3 .
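Tolerance and VIF are reciprocals (TOL = 1/VIF, with VIF = 1/(1 − R²)), so the paired values quoted above can be cross-checked; a Python sketch (the factor names and the common VIF threshold of 10 are illustrative conventions, not taken from the paper):

```python
def tol_from_vif(vif: float) -> float:
    """Tolerance is the reciprocal of the variance inflation factor."""
    return 1.0 / vif

def vif_from_r2(r2: float) -> float:
    """VIF = 1 / (1 - R^2), where R^2 comes from regressing one
    predictor on all the others."""
    return 1.0 / (1.0 - r2)

def flag_collinear(vifs: dict, threshold: float = 10.0) -> list:
    """Return names of factors exceeding the usual VIF threshold."""
    return [name for name, v in vifs.items() if v > threshold]

# Values quoted in the text: depth has the highest TOL / lowest VIF.
print(round(tol_from_vif(1.515), 2))                  # → 0.66
print(flag_collinear({"depth": 1.515, "Ca": 2.632}))  # → []
```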
Assessment of groundwater vulnerability
Groundwater vulnerability in the aquifers of the Ganges delta was assessed using the LR, RF and ANN models, and the results are presented in Fig. 4 . Statistical, ML, and neural network algorithms were used to understand the spatial distribution of groundwater vulnerability in this vulnerable mega-delta region. We used ArcGIS 10.5 software to map the final spatial distribution of vulnerability using the respective modelling outcomes. Each map was classified into five vulnerability zones, namely “very low, low, moderate, high and very high”, using the “Jenks natural breaks method”. The final vulnerability maps show that very high groundwater vulnerability zones are found in the eastern and some isolated south-eastern and central middle portions. Conversely, very low groundwater vulnerability zones are found in the north-western, eastern, and south-western parts. The moderate vulnerability zone is found in the central part and isolated patches of the south-eastern and southern parts of the study area. Due to the high concentration of As and other contaminants in the groundwater, the eastern part of the Ganges delta, i.e., the Bangladesh region, has highly vulnerable groundwater compared to the western part of the delta region, i.e., the state of West Bengal in India. However, in the RF and ANN models, two isolated patches in the western (Indian) region of the Ganges delta also fall within the very high vulnerability zone (Fig. 4 ).
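The five-zone classification was done with Jenks natural breaks in GIS software (the `jenkspy` package offers the same algorithm in Python). As a simple stand-in, a Python sketch using equal-interval breaks rather than Jenks, over hypothetical vulnerability scores:

```python
def classify(values, n_classes=5,
             labels=("very low", "low", "moderate", "high", "very high")):
    """Assign each score to one of n equal-interval classes
    (a simple stand-in for Jenks natural breaks)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_classes
    out = []
    for v in values:
        # Clamp the top value into the last class.
        idx = min(int((v - lo) / width), n_classes - 1) if width else 0
        out.append(labels[idx])
    return out

scores = [0.05, 0.22, 0.48, 0.71, 0.93]  # hypothetical vulnerability scores
print(classify(scores))
```

Jenks differs in that it places breaks to minimize within-class variance, which usually fits skewed score distributions better than equal intervals.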
Importance hydrochemical parameters for groundwater vulnerability
Not all selected hydrochemical parameters carry equal responsibility for groundwater vulnerability in this study. Therefore, it is fundamental to determine the dominant factors in each applied learning model for groundwater vulnerability. The most dominant factors were identified for the three applied models, i.e., LR, RF, and ANN. The results of the dominant factors for groundwater vulnerability are presented in Table 3 for the three applied models. Factors such as F − (0.74), Na + (0.77), As (0.69), Mg 2+ (0.58) and HCO 3 − (0.54) are more dominant, while SO 4 2− (0.2), pH (0.21), EC (0.31) and PO 4 2− (0.32) are less dominant in the LR model. The “mean decrease accuracy (MDA) method of the RF algorithm” revealed that Na + (0.84), F − (0.77) and As (0.72) are the most influential factors on groundwater resources, followed by HCO 3 − (0.55) and Mg 2+ (0.54). In the ANN, the dominant factors are Na + (0.88), F − (0.81), As (0.78) and HCO 3 − (0.67), and the less dominant factors are SO 4 2− (0.19), pH (0.24), salinity (0.31), and EC (0.33).
Evaluation assessment
All three models were evaluated using various metrics such as “sensitivity, specificity, NPV, PPV, ROC-AUC, Kappa-coefficient, and F-score”. Among the three models, the ANN model is the most suitable for modelling groundwater vulnerability, with ROC-AUC values of 0.912 and 0.902 for training and validation, respectively. This is followed by the RF model with 0.817 and 0.792 for training and validation, and then the LR model with 0.749 and 0.712 for training and validation. The PPV and NPV are also high in the ANN model, with values of 0.883 and 0.885 in the validation stage. The sensitivity analysis showed that the ANN model had the highest result at 0.889, followed by RF and LR with 0.782 and 0.721, respectively, in the validation stage. The Kappa and F-score also indicate that the ANN model is the best fit, with values of 0.643 and 0.882 in the validation stage, followed by RF and LR (Table 4 ). The Taylor diagram in Fig. 5 also shows that the ANN is optimal based on standard deviation and correlation.
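All of these metrics derive from confusion-matrix counts; a Python sketch with illustrative counts (not the study's actual confusion matrix):

```python
def metrics(tp, tn, fp, fn):
    """Confusion-matrix metrics of the kind used in the evaluation."""
    n = tp + tn + fp + fn
    p_o = (tp + tn) / n                                             # observed agreement
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "F": 2 * tp / (2 * tp + fp + fn),
        "kappa": (p_o - p_e) / (1 - p_e),
    }

# Illustrative counts only.
m = metrics(tp=80, tn=70, fp=20, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```

ROC-AUC, in contrast, is computed by sweeping the classification threshold rather than from a single confusion matrix.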
Quality assessment of groundwater
Piper, USSL, and Wilcox diagrams were used to assess the hydrochemical properties and quality of groundwater in the Ganges delta region. The Piper diagram (Fig. 6 a) showed that alkaline earths (Ca 2+ + Mg 2+ ) dominate over alkalis (Na + + K + ) and that Cl − and NO 3 − dominate over HCO 3 − . The Wilcox diagram (Fig. 6 b) showed that two samples were unsuitable, while the others fell into the doubtful to unsuitable, permissible to doubtful, good to permissible and excellent categories. The USSL diagram (Fig. 6 c) revealed a dominance of high salinity and low sodium (alkali) hazard. The collected datasets were grouped and analyzed using a hierarchical clustering method, which showed that the second cluster, and to a lesser extent the first cluster, significantly influenced the state of groundwater quality. The dendrogram (Fig. 6 d) showed that the first cluster covered approximately 32% of the datasets, the second cluster covered the largest share (47%) and the third cluster the smallest (21%).

Discussion
The fundamental reasons for spatio-temporal fluctuations in the groundwater supply are increasing water demand across all sectors and changing climatic conditions 57 . These factors present a significant challenge to water resource planners. This study has demonstrated that considerable concentrations of elevated arsenic and nitrate in groundwater, as well as salinization, are among the groundwater quality issues in the coastal areas of the multi-aquifers of the Ganges delta. The quality of groundwater along the coast primarily depends on geological conditions, hydrogeological processes, and chemical activities 58 . Therefore, a trustworthy assessment of groundwater vulnerability is a crucial first step in choosing the best design or framework for future water resource development.
It can be challenging to select a preferred model for assessing inherent vulnerability that can effectively match the research topic's features and the study area's geo-environmental characteristics. The literature reveals that many academics have compared two or more vulnerability indices to create a meticulously tailored intrinsic vulnerability model for their research, aiming to achieve optimal output 59 , 60 . In the current scenario, statistical and machine learning (ML) algorithms are widely employed in groundwater-related studies worldwide. For example, Yu et al. 61 applied an integrated Variable Weight Model (VWM) and DRASTIC model to assess groundwater vulnerability in China and found that the VWM-DRASTIC combination provided optimal predictive analysis. Vu et al. 62 used a numerical model and the index-overlay method in conjunction with climate scenarios (RCPs) to evaluate groundwater vulnerability and associated sustainability in Taiwan, and they recommended optimal predictive analysis. Furthermore, several machine learning models have been utilized in various groundwater-related studies, including groundwater vulnerability 4 , 14 , 18 , 63 , nitrate concentration in groundwater 64 – 66 , and more. The random forest (RF) model is well-known for its numerous advantages and has been employed in various geoscientific fields, including groundwater vulnerability studies. Lahjouj et al. 67 utilized the RF algorithm in a survey of groundwater vulnerability to nitrate concentration in Morocco and achieved an accuracy assessment of 0.822 in terms of AUC-ROC. Similarly, Saha et al. 18 used RF to assess hydrochemical-based groundwater vulnerability in parts of the Ganges delta and achieved optimal accuracy rates of 0.849 and 0.812 in the training and validation data of the ROC.
Various statistical techniques are available, ranging from straightforward descriptive statistics of concentrations of specific contaminants to more complex regression analyses that consider the impacts of multiple predictor variables 68 . Binary logistic regression, sometimes known as logistic regression (LR), is a frequently used statistical technique for estimating groundwater vulnerability. LR models relate the potential influencing factors to the likelihood that a pollutant concentration will exceed a threshold value. Mohammaddost et al. 69 employed DRASTIC, EBF, and LR models in the Kabul basin of Afghanistan to assess groundwater vulnerability, and they found that LR provided 66% accuracy in AUC-ROC prediction analysis. Adiat et al. 70 applied LR for the same assessment in the Ilesa gold mining area of Nigeria, achieving an 85.7% accuracy in model prediction. Recently, with the significant advantages of neural network algorithms, several neural network models have also been used in groundwater studies. For instance, Elzain et al. 27 used the DLNN model in aquifer vulnerability studies in South Korea, while Elzain et al. 71 employed the RBNN model to assess groundwater vulnerability to nitrate contamination in the southern part of Korea.
Based on the discussion above and considering the significant advantages of statistical, machine learning (ML), and neural network algorithms, three popular learning algorithms, namely logistic regression (LR), random forest (RF), and artificial neural network (ANN), were selected for the optimal assessment of groundwater vulnerability in the mega delta of the Ganges delta, taking into account field-based hydrochemical parameters. The findings of this study demonstrate that among the applied models, ANN yields the most optimal results, with AUC-ROC scores of 0.912 and 0.902 in training and validation, respectively, for groundwater vulnerability studies. RF follows with scores of 0.817 and 0.792 in training and validation, and LR with scores of 0.749 and 0.712 in training and validation. The high performance of the ANN model can be attributed to its capacity for parallel processing, enabling it to handle multiple tasks simultaneously. The statistical analysis of all selected hydrochemical parameters reveals that pH and K + (0.952) and EC and Cl − (0.973) are highly correlated, while pH and salinity (0.546), pH and Mg 2+ (0.506), EC and Na + (0.644), Mg 2+ and K + (0.593), and Na + and Cl − (0.613) show moderate correlations. It is also found that pH, NO 3 − , As, and K + are the most influential factors for groundwater vulnerability in this study region.
Henceforth, studies on groundwater vulnerability serve as crucial measurements for the sustainable management of water resources, environmental preservation, and the guarantee of a secure and uncontaminated drinking water supply for both present and future generations.
Nonetheless, it is a fact that employing combined techniques and methodologies can aid in resolving ambiguities related to GIS-based vulnerability assessment frameworks in geoscientific fields. The approaches presented in this research can be tested in various hydrogeological and geo-environmental contexts to understand the spatial distribution of vulnerability. Evaluating groundwater vulnerability studies requires careful consideration of the data and tools used for validation. Furthermore, a limitation of this study is that it did not consider various important factors, such as the hydrogeological processes of groundwater, land use/land cover, and aquifer and soil characteristics, all of which affect groundwater quality. In the future, other neural network and deep learning algorithms can be beneficial for the optimal assessment of groundwater vulnerability in the mega-delta, considering changing climate and land use/land cover. Therefore, the results of this study will be valuable to land use planners and provide fundamental information for the optimal assessment and management of groundwater risk zones accordingly.

Conclusion
Globally, assessing susceptibility to groundwater contamination is crucial for proactive management aimed at safeguarding groundwater resources for various uses. More precise vulnerability maps support the creation of more effective sustainable development policies regarding potential groundwater pollution. In the Ganges deltaic region, the high concentrations of contaminants, such as arsenic (As), are primarily responsible for groundwater vulnerability, and the associated human health hazards are a significant concern for global researchers. The present research focuses on creating an effective vulnerability map for a mega-delta, specifically the Ganges delta. This involves the application of LR, RF, and ANN models in the modelling and mapping process. Sensitivity analysis indicates that the ANN output is the most optimal, followed by RF and LR. The study reveals that the neural network algorithm is the best suited for assessing groundwater vulnerability related to contamination in the study region, surpassing traditional statistical analysis. Hydrochemical parameters such as pH, NO 3 − , As, and K + dominate this deltaic aquifer, contributing to vulnerability. Overall, all vulnerability maps indicate that the study area’s western, central, south, and eastern parts are highly vulnerable. Due to elevated levels of As and various ion contaminations, most groundwater samples from the Ganges delta are unsuitable for drinking and irrigation. Consequently, the improper implementation of government policies, a lack of awareness, and inadequate management are the primary concerns leading to groundwater deterioration in this region. Therefore, immediate action is necessary to sustain and conserve groundwater resources in the world's largest and most densely populated deltaic region.
In future work, applying deep learning together with datasets from both the dry and wet sampling seasons will be helpful for a better understanding of groundwater vulnerability in this vulnerable region.

Abstract

Determining the degree of high groundwater arsenic (As) and fluoride (F − ) risk is crucial for successful groundwater management and protection of public health, as elevated contamination in groundwater poses a risk to the environment and human health. It is a fact that several non-point sources of pollutants contaminate the groundwater of the multi-aquifers of the Ganges delta. This study used the logistic regression (LR), random forest (RF) and artificial neural network (ANN) machine learning algorithms to evaluate groundwater vulnerability in the Holocene multi-layered aquifers of the Ganges delta, which is part of the Indo-Bangladesh region. Fifteen hydrochemical parameters were used for modelling purposes, and sophisticated statistical tests were carried out to check the dataset for dependent relationships. ANN performed best with an AUC of 0.902 in the validation dataset, and a groundwater vulnerability map was prepared accordingly. The spatial distribution of the vulnerability map indicates that the eastern and some isolated south-eastern and central middle portions are very vulnerable in terms of As and F − concentration. The overall prediction demonstrates that 29% of the areal coverage of the Ganges delta is very vulnerable to As and F − contents. Finally, this study discusses major contamination categories, rising security issues, and problems related to groundwater quality globally. Henceforth, groundwater quality monitoring must be significantly improved to successfully detect and reduce hazards to groundwater from past, present, and future contamination.
Study area
The Ganges and Brahmaputra delta, known as the Ganges delta, is one of the mega-deltas of the world, covering an area of approximately 105,000 km 2 . It consists of Bangladesh and parts of India’s state of West Bengal, formed by sedimentation of the Ganga, Meghna and Brahmaputra rivers at the Bay of Bengal during the late Holocene to recent times 20 . The delta stretches from 21° 10′ 42′′ to 24° 50′ 39′′ N latitude and 87° 30′ 21′′ to 91° 26′ 46′′ E longitude (Fig. 1 ) and has a shoreline of nearly 350 km along the Bay of Bengal. The Ganges delta has been divided into three parts from a geological perspective, i.e., the “Moribund delta, Active delta and Mature delta” 34 . The delta's stratigraphic section shows alternating sand-dominated and fine-grained phases with intricate interfingerings between them 33 . This delta, enclosed by “Precambrian crystalline rocks” to the north and west and the “Assam-Arakan Neogene fold belt” to the east, records a comprehensive sedimentation history during the late Quaternary period 35 . The literature indicates that numerous elevated terraces from the Pleistocene era are present both within and along the periphery of its alluvial plain 36 . Recent remote sensing evidence supports neotectonic activity in the Gangetic plain 37 . Salinity has impacted aquifers in the coastal regions of Bangladesh, reaching depths of up to 350 m; furthermore, the salinity levels in the upper aquifers of the coastal region, at depths of 200–250 m, demonstrate notable fluctuations and experience abrupt changes over short distances 38 . The monsoon season (June–October) accounts for more than 80% of the annual rainfall, which ranges from 1500 to 2000 mm 39 . During the monsoon months, high rainfall and frequent tropical cyclones cause catastrophic flooding and saltwater intrusion in the land areas. The minimum seasonal temperature of the region varies from 12 to 24 °C, and the maximum ranges from 25 to 35 °C.
The area has the highest population density among deltaic regions due to its high soil fertility 30 . The Sundarbans, the world's largest mangrove forest, covers the southernmost part of this deltaic region, also known as the Sundarban delta. Borehole data indicate that the sediment primarily consists of sand and clay types.

Acknowledgements
This publication was supported by the Deanship of Scientific Research at the King Faisal University, Saudi Arabia (Grant: 3422).
Author contributions
A.S.: conceptualization, methodology, investigation, formal analysis, visualization, writing-original draft, writing—review and editing; S.C.P.: supervision, conceptualization, methodology, investigation, formal analysis, visualization, writing-original draft, writing—review and editing; A.R.M.T.I.: investigation, formal analysis, visualization, writing-original draft, writing—review and editing; A.I.: investigation, formal analysis, visualization, writing-original draft, writing—review and editing; E.A.: investigation, formal analysis, visualization, writing-original draft, writing—review and editing; M.K.I.: investigation, formal analysis, visualization, writing-original draft, writing—review and editing.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare no competing interests.

License: CC BY. Citation: Sci Rep. 2024 Jan 13; 14:1265.
PMC10787757 (PMID: 38218904)

Introduction
Burning biomass, including wood, leaves, grass, and other materials, is likely to be a contributor to air pollution in many parts of the world 1 . The transition from a natural to an agricultural ecosystem resulted in the loss of 20–80 tons of carbon (C) per hectare, which had negative effects on the quality of water and soil as well as the productivity of biomass 2 . To maintain sustainable agriculture and environment, actions must be taken to reduce C losses from soil to atmosphere. Thermal carbonization is one of the suitable options for mitigating climate change and minimizing C losses. Pyrolysis and hydrothermal carbonization usually use high-temperature processes for transforming biomass into solid materials such as biochar (BC) and hydrochar (HC) 3 – 5 . BC and HC have garnered global interest recently because of their considerable potential applications in various sectors, such as soil remediation, wastewater treatment, climate change mitigation, CO 2 capture, and energy storage 6 – 10 . The C storage potential of BC and HC is a result of their high recalcitrance, which slows the rate at which C fixed by the photosynthetic processes of biomass is released back into the atmosphere 11 , 12 . Also, aromatic ring structures in BC and HC make them more stable and recalcitrant against thermal and microbial degradation 13 , 14 . Uncontrolled open burning of biomass releases harmful gases, enhancing environmental pollution. Therefore, converting biomass to BC can sequester C for 1000 years 15 . On the other hand, HC can sequester less C than BC 16 . It has been estimated that BC can sequester around 50% of the initial C 15 . Likewise, HC is also considered to be more stable than raw biomass 17 . The stability and recalcitrance of BC and HC are the most decisive factors that determine their C sequestration potential.
These properties of the produced BC and HC depend on the type of feedstock, pyrolysis temperature, residence time, and pretreatment conditions 18 . The literature shows that pyrolysis temperature, residence time, and heating rate are significantly correlated with the pH, yield, surface area, fixed C, volatile matter, and ash content of biochar 18 . For instance, pyrolysis temperature is positively correlated with pH, surface area, microporosity, and ash content, while having a negative relationship with yield, functional groups, and volatile matter 18 . Additionally, BC and HC could improve the water holding capacity, soil fertility, soil nutrients, and productivity of agriculture, while reducing greenhouse gas emissions. Recent research has indicated that BC and HC can be employed as inexpensive alternative sorbents to eliminate a range of different organic and inorganic contaminants from the environment.
Typically, BC is produced from organic waste materials such as agricultural waste, municipal sewage sludge, and manure under limited oxygen supply during pyrolytic processes 19 – 22 , while Pauline and Joseph 23 mentioned that HC is produced by controlled carbonization of biomass through thermochemical decomposition under pressure in hot compressed water at 180–300 °C for many hours. In terms of their physical and chemical characteristics, BC and HC (both chars) differ from one another, which makes them highly adaptable tools in a wide range of industries and environments. Despite these advantages, some disadvantages are associated with such materials. For instance, degradation of cellulose and lignin may generate phenolic chemicals as well as PAHs and dioxins during the thermochemical decomposition of biomass, which could make the resultant material toxic and potentially pose a risk to the soil biota if applied as a soil amendment 24 , 25 . Hence, to minimize the negative impacts of HC and BC on soil biota, it is crucial to examine the ecotoxicological composition of BC and HC before their application as soil amendments. Washing char materials with deionized water is a common practice to minimize toxicity potential 26 . Moreover, due to their surface characteristics and heterogeneous nature, BC and HC are not always successful 27 . Therefore, to enhance the physical and chemical characteristics of the charred materials, modification of the charred materials with other substances, such as acids, oxides, and polymers, has been developed. Combining BC or HC with foreign materials may combine the benefits of both materials, subsequently resulting in improved performance. However, in the majority of cases, these materials could be either expensive, have a specific use 28 , cause secondary pollution 29 , have a short half-life (ozonation) 30 , or be complicated to prepare 31 .
Therefore, scientists are looking for cheaper, greener, and more efficient materials to modify BC and HC for better results and broad applications. Clay minerals are extensively employed in agriculture, industrial engineering, and the exploration, extraction, and refinement of fuel. Some of the essential physical and chemical characteristics which make clay minerals desirable and beneficial are particle size, particle shape, surface chemistry, and surface area 32 . Kaolinite clay can serve as a potential candidate for this purpose due to its cost-effectiveness and abundance. In kaolinite clay minerals, one tetrahedral sheet is joined to one octahedral alumina sheet via oxygen atoms as layered silicate minerals 33 . Kaolinites have surface exchange sites, but no exchange sites exist between the layers 34 . Hence, using kaolinite clay minerals to modify the surface of BC and HC could create new composites that improve performance in environmental applications 27 , 35 , 36 . Combining BC or HC with kaolinite may improve the porous structure of the composite due to better distribution of the kaolinite particles on the BC or HC matrix 37 . On the other hand, kaolinite is abundant, cheap, environment-friendly, and chemically more stable. Thus, kaolinite-composited BC and HC could have superior possibilities for applications as sorbents and amendments to soil 38 – 40 . Previous studies showed that biochar composites with kaolinite significantly enhanced the treated soil’s organic matter, ammoniacal nitrogen content (NH 4 -N), and cation exchange capacity (CEC) compared to the control 41 . In addition, the soil's NH 4 -N, organic matter, and CEC increased with an increase in the kaolinite percentage in the biochar composites 41 . Furthermore, the plant root and shoot length and biomass significantly increased compared to the control 41 . Another study, by Qiu et al. 42 , revealed that the lowest toxic Cd concentration and the highest stable Cd concentration were detected in the soil treated with a kaolinite-biochar composite. They also mentioned that the kaolinite enhanced the stability of the biochar. Therefore, we hypothesized that compositing biochar/hydrochar with kaolinite clay minerals might combine the benefits of both materials, consequently resulting in a stable and applicable material for environmental and agricultural applications.
To the best of our knowledge, very limited research has attempted to combine and compare composites of BC and HC with kaolinite. Therefore, the main aims of this research were to: (1) synthesize low-cost composite materials of BC and HC with kaolinite natural deposits via pyrolysis and hydrothermal carbonization, (2) characterize the synthesized materials for chemical, proximate, elemental, and structural properties, and evaluate their stability and C sequestration potential, and (3) explore the potential toxicity of the synthesized materials by analyzing PAH compounds and investigating impacts on maize ( Zea mays L . ) seed germination.

Materials and methods
Synthesis of materials
Synthesis of kaolinite-biochar composite
Conocarpus waste was gathered from the King Saud University Campus in Riyadh, Saudi Arabia. The conocarpus waste was then washed, air-dried, ground, passed through a 1000 μm sieve, and termed as BM. Kaolinite deposits were obtained from the Al-Zobaira region in the Hael governorate (N: 27.916207, E: 43.711223). Kaolinite deposits were dried, ground, and washed with warm deionized water to remove gypsum. Later on, the kaolinite deposits were washed with deionized water and shaken many times for removal of soluble salts. Thereafter, the kaolinite deposits were dried at 105 °C for 4 h, and ground by grinder (Rotary Cup Mill (BICO), Sepor Company, India) to less than 100 μm size. The suspension of kaolinite was made by adding 0, 1, and 2 g of kaolinite powder to 500 mL of deionized water; the mixture was ultrasonically sonicated for 30 min with a frequency of 50 kHz (Ultrasonic sonicator Q700, Qsonica, Newtown, CT, USA). Subsequently, conocarpus BM (10 g) was immersed into the kaolinite suspension and stirred for 1 h. Following separation from the mixture, the kaolinite-BM was dried in an oven at 80 °C. Then, the kaolinite-BM was placed in a tightly sealed stainless-steel container (length: 22 cm and diameter: 7 cm), put in the muffle furnace (WiseTherm; Daihan Scientific, Gangwon-do, South Korea), and pyrolyzed at 600°C for 1 h under limited oxygen supply. The untreated feedstock was also used to prepare BC without kaolinite modification (i.e., pristine BC) with the same conditions of pyrolysis in the furnace. The pristine BC and kaolinite-BC composites were cooled and washed with deionized water many times for impurities removal, then dried in an oven, ground, sieved through a 100 μm screen, and stored in a container for further analyses. The synthesized pristine BC and kaolinite-modified BC composites with 10% and 20% kaolinite were henceforth referred to as BC, BCK10, and BCK20, respectively.
Synthesis of kaolinite-hydrochar composite
The dried kaolinite-BM mixtures were used for the preparation of the kaolinite-HC composites. Sixty grams of each kaolinite-BM mixture was added to 600 mL (1:10 w/v) of DI water, stirred for 1 h, placed in a tightly sealed stainless-steel container (length: 30 cm, diameter: 7 cm), held in the muffle furnace (WiseTherm; Daihan Scientific, Gangwon-do, South Korea) at 200 °C for 6 h, and then allowed to cool. The untreated feedstock was used to prepare HC without kaolinite modification (i.e., pristine HC). After oven-drying, the pristine HC and kaolinite-HC composites were washed several times with deionized water to remove impurities, dried in an oven, ground in a mortar, sieved to pass 100 μm, and stored in containers for further analyses. The synthesized pristine HC and the kaolinite-modified HC composites with 10% and 20% kaolinite are henceforth referred to as HC, HCK10, and HCK20, respectively.
Characterization of the synthesized materials
All synthesized materials, i.e., BM, BCs (BC, BCK10, and BCK20), and HCs (HC, HCK10, and HCK20), were subjected to various chemical, proximate, and ultimate analyses.
Yield, proximate and chemical analyses
Equation ( 1 ) was applied to estimate the yield percentage of the synthesized materials: Yield (%) = (weight of synthesized material / weight of dry feedstock) × 100 (1)
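As a worked example, the yield calculation can be reproduced with a short function (a minimal sketch; the masses below are illustrative, not measured values from this study):

```python
def yield_percent(product_mass_g: float, feedstock_mass_g: float) -> float:
    # Yield (%) = (mass of synthesized material / mass of dry feedstock) x 100
    return product_mass_g / feedstock_mass_g * 100.0

# Illustrative example: 100 g of dry feedstock yielding 24.15 g of char
char_yield = yield_percent(24.15, 100.0)
```

With these illustrative masses the function evaluates to 24.15, matching the form of the reported yields (e.g., 24.15% for BC).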
The proximate analysis of the synthesized materials was carried out to measure the percentages of moisture, ash, volatile matter, and fixed carbon 43 , 44 . The pH of the synthesized materials was measured at a 1:25 solid/water ratio 45 , and the EC was determined on the same extract. The ammonium acetate extraction method was used to determine the CEC values of the synthesized materials 46 .
Ultimate analysis
The CHNS analyzer (PerkinElmer series II, Waltham, USA) was used for the ultimate analysis (i.e., elemental composition) of the synthesized materials to measure C, nitrogen (N), hydrogen (H), and sulfur (S). Equation ( 2 ) was used to calculate the percentage of oxygen (O) in the synthesized materials: O (%) = 100 − (C + H + N + S + ash) (%) (2)
From the obtained elemental composition, the aromaticity and polarity indices of the synthesized materials were also calculated as the elemental molar ratios H/C and O/C, respectively.
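The ultimate-analysis bookkeeping above can be sketched in a few lines. This assumes oxygen is obtained by difference (subtracting C, H, N, S, and ash from 100%) and that each molar ratio divides the mass percentage by the element's atomic mass; the sample values are illustrative, not the study's data:

```python
def oxygen_by_difference(c: float, h: float, n: float, s: float, ash: float) -> float:
    # O (%) assumed as 100 minus the C, H, N, S, and ash percentages
    return 100.0 - (c + h + n + s + ash)

def molar_ratio(mass_pct_a: float, mass_pct_b: float,
                atomic_mass_a: float, atomic_mass_b: float) -> float:
    # Molar ratio a/b = (mass% / atomic mass) of a over that of b
    return (mass_pct_a / atomic_mass_a) / (mass_pct_b / atomic_mass_b)

H_MASS, C_MASS, O_MASS = 1.008, 12.011, 15.999  # atomic masses (g/mol)

# Illustrative composition (mass %), not measured values from this study
c, h, n, s, ash = 70.0, 1.2, 0.5, 0.1, 27.0
o = oxygen_by_difference(c, h, n, s, ash)
hc_ratio = molar_ratio(h, c, H_MASS, C_MASS)  # aromaticity index (H/C)
oc_ratio = molar_ratio(o, c, O_MASS, C_MASS)  # polarity index (O/C)
```

Low H/C and O/C values computed this way correspond to the more aromatic, less polar chars discussed in the Results.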
XRD, SEM, FTIR, BET, TGA, zeta potential and hydrodynamic size analyses
X-ray diffraction (XRD) (MAXima X XRD-7000, Shimadzu, Japan) was employed to determine the mineralogical composition of the synthesized materials. The surface morphology and structural changes of the synthesized materials were analyzed by scanning electron microscopy (SEM) (EFI S50 Inspect, Netherlands). The samples were spread on aluminum stubs coated with adhesive carbon tape (12 mm; PELCO, UK) and then coated for 60 s with nano-gold particles using a 108 Auto/SE sputter coater (Ted Pella Inc., USA). The images were captured in a high vacuum at an acceleration voltage of 30 kV and a magnification of 3000×. A Fourier transform infrared spectrometer (FTIR, Bruker Alpha-Eco ATR-FTIR, Bruker Optics Inc.) was employed to analyze the functional groups of the synthesized materials. The Brunauer–Emmett–Teller (BET) method, using a surface area and porosity analyzer (TriStar II 3020, Micromeritics, USA), was used to determine the surface area, total pore volume, and pore size. Thermogravimetric analysis (TGA) was performed to record the weight loss of the synthesized materials (DTG-60H, Shimadzu, Japan) as the temperature increased from 25 to 1000 °C. The zeta potential of the synthesized materials was measured by dynamic light scattering, determining the electrophoretic mobility of 1 g L −1 particle suspensions with a Zetasizer (Zetasizer Nano ZS, Malvern, UK). Using laser Doppler velocimetry (Zetasizer Nano ZS, Malvern, UK), the average hydrodynamic size of the synthesized material particles was determined in aqueous suspensions.
Estimation of thermal stability
Harvey et al. 47 established the recalcitrance index (R 50 ) to quantify the materials' relative thermal degradability, calculated from moisture- and ash-corrected TGA data following Eq. ( 3 ): R 50 = T 50,x / T 50,graphite (3) where T 50,x and T 50,graphite are the temperatures on the moisture- and ash-corrected TGA thermograms of the synthesized materials and graphite, respectively (weight loss due to oxidation of C only), at which 50% of the weight is lost by oxidation or volatilization.
According to Harvey et al. 47 , Eq. ( 4 ) was used to correct the TGA thermograms for moisture and ash contents: W i,cor = 100 × (W i,uncor − W 200,uncor ) / (W cutoff,uncor − W 200,uncor ) (4) where W i,cor and W i,uncor represent the corrected and uncorrected percent weight loss of the initial material, respectively, W 200,uncor represents the initial material's percent weight loss up to 200 °C (corresponding to water loss in the material), and W cutoff,uncor is the weight loss at the temperature beyond which no further oxidation occurred.
Materials can be categorized into three groups based on R 50 values 47 : R 50 ≥ 0.7 = highly recalcitrant (class 1); 0.7 > R 50 ≥ 0.5 = minimally degradable (class 2); R 50 < 0.5 = highly degradable (class 3).
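The R 50 definition and the three-class scheme translate directly into code (a minimal sketch of the Harvey et al. classification; the T 50 inputs used for the ratio are hypothetical):

```python
def r50(t50_material: float, t50_graphite: float) -> float:
    # Recalcitrance index: T50 of the material relative to T50 of graphite,
    # both read from moisture- and ash-corrected TGA thermograms
    return t50_material / t50_graphite

def recalcitrance_class(r: float) -> str:
    # Classification thresholds as given by Harvey et al.
    if r >= 0.7:
        return "highly recalcitrant"   # class 1
    if r >= 0.5:
        return "minimally degradable"  # class 2
    return "highly degradable"         # class 3
```

For example, an R 50 of 0.79 (as reported here for BCK20) falls in the highly recalcitrant group, while 0.42 (HC) is highly degradable.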
Equation ( 5 ) provided by Zhao et al. 48 was used to calculate the percent carbon sequestration potential (CS), where C % material and C % feedstock are the percent carbon contents of the material and the feedstock, respectively.
PAHs analysis
The extraction method of El-Saeid et al. 49 was used: 1 g of material and 1 mL of DI water were added to a 50 mL centrifuge tube and, after a short vortex, the mixture was allowed to homogenize for roughly 10 min. Each material received 6 mL of acetonitrile, and the sample was shaken for 5 min. The citrate salts from the Mylar pouch (ECQUEU750CT-MP) were then added to each centrifuge tube, and the materials were immediately shaken for at least 2 min and centrifuged for 5 min at ≥ 3500 rcf. Cleanup was performed by transferring a 1.5 mL aliquot of supernatant to a 2 mL CUMPSC18CT (MgSO 4 , PSA, C18) dSPE tube. The materials were then vortexed for 2 min and centrifuged for 2 min at high rcf (≥ 5000). The supernatant was immediately transferred into a 1.8 mL GC vial through a 0.2 μm syringe filter. Finally, the PAHs in the solution were quantified by GC–MS/MS (TQD).
The GC–MS/MS conditions were adjusted according to the EPA method for SVOCs (EPA 8270). Auto SRM was carried out for procedure development. The optimized procedure used quantifier and qualifier ions to achieve good sensitivity. With a maximum of 51 transitions per segment, scanning was completed in each segment (500–700 MS). The MS was automatically tuned prior to each batch of analyses; nitrogen and argon were employed as collision gases, and helium was employed as the carrier gas (Linde gas; SiGas, Saudi Arabia). A total of 17 PAHs were analyzed in all synthesized materials.
Germination test
This study complies with national and international regulations and legislation concerning the maize ( Zea mays L.) plant. The methods involved in this study are in accordance with the IUCN Policy Statement on Research Involving Species at Risk of Extinction and the Convention on International Trade in Endangered Species of Wild Fauna and Flora. The International Biochar Initiative recommends a quick and easy germination inhibition assay to determine whether any unfavorable substances are present in synthesized materials. A germination experiment was conducted in Petri plates to assess the toxic impacts of the synthesized materials. Briefly, maize seeds were germinated on filter papers (Whatman 42), and the phytotoxicity of the synthesized materials was assessed by comparing their germination results with those of the control (without any amendment). The Petri plates (90 mm × 12 mm) were lined with filter papers cut to fit, and the papers were soaked with 5 mL of DI water. Then, 0.2 and 0.4 g of pristine BC, HC, and the kaolinite-synthesized BC and HC (BCK10, BCK20, HCK10, and HCK20) were spread separately on the respective filter papers. Next, 10 maize seeds were placed on each filter paper. The Petri plates were covered and kept in darkness for 48 h at 25 °C, then subjected to cycles of 16 h light and 8 h darkness for the next 11 days. The number of seeds that germinated in each treatment was used to calculate the germination rate percentage 8 . Moreover, the fresh and dry weights of the maize seedlings and the shoot and root lengths were measured. Each treatment was carried out in triplicate.
Equation ( 6 ) of the germination index (GI) is as follows:
Statistical analysis
The obtained data were statistically analyzed using the Statistix 8.1 program 50 . Descriptive statistics were used to calculate means and standard deviations. The least significant difference (LSD) test was applied to compare treatment means at a significance level of 0.05.
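The mean comparison described above can be sketched with the standard library alone. This is a generic Fisher's LSD implementation, not the statistical package's routine; the replicate values and the tabulated critical t (2.447 for 6 error degrees of freedom at α = 0.05) are illustrative assumptions:

```python
from statistics import mean

def lsd_test(groups, t_crit):
    """One-way ANOVA mean separation by Fisher's LSD.
    groups: dict name -> list of replicate values (equal n assumed).
    t_crit: two-tailed critical t for the error df (from a t-table)."""
    n = len(next(iter(groups.values())))
    df_error = sum(len(v) for v in groups.values()) - len(groups)
    # Mean square error: pooled within-group sum of squares over error df
    sse = sum(sum((x - mean(v)) ** 2 for x in v) for v in groups.values())
    mse = sse / df_error
    lsd = t_crit * (2 * mse / n) ** 0.5
    # Pairs whose mean difference exceeds the LSD differ at the chosen alpha
    names = list(groups)
    sig = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
           if abs(mean(groups[a]) - mean(groups[b])) > lsd]
    return lsd, sig

# Illustrative yield replicates (%), not the study's raw data
data = {"BC": [24.0, 24.3, 24.15],
        "BCK10": [32.8, 33.0, 32.9],
        "BCK20": [36.7, 36.9, 36.9]}
lsd, sig = lsd_test(data, t_crit=2.447)  # t(0.975, df = 6)
```

With these illustrative replicates, all three pairwise treatment differences exceed the LSD, i.e., all means separate at the 0.05 level.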
Characterization
Proximate and chemical analyses
The results of the chemical and proximate analyses of the fabricated materials are presented in Table 1 . The HC materials (HC, HCK10, and HCK20) had higher yields than the BC materials (BC, BCK10, and BCK20); the highest yield was found in HCK20 and the lowest in BC. Moreover, yield increased with an increasing percentage of kaolinite, as seen in BCK20 and HCK20 compared with the pristine materials. The yields followed the order: HCK20 (64.35%) > HCK10 (62.19%) > HC (56.74%) > BCK20 (36.84%) > BCK10 (32.89%) > BC (24.15%). The lower yield of the BC materials can be attributed to greater volatile matter loss and greater weight loss from the biomass during pyrolysis than in the HC materials, which were made through hydrothermal carbonization in an airtight container. The higher yields of the kaolinite-synthesized materials indicate greater thermal stability than the pristine materials, which could be related to the highly resistant nature of kaolinite against thermal degradation 42 . The volatiles decreased with increasing kaolinite percentage and carbonization; relative to BM, the reduction was 3.94-fold for BC, 4.13-fold for BCK10, and 4.07-fold for BCK20, and 1.44-fold for HC, 1.67-fold for HCK10, and 2.02-fold for HCK20. Likewise, the fixed C was higher in pristine BC and HC than in the kaolinite-based BC and HC materials. Among the synthesized materials, the highest fixed C was observed in BC (70.20%) and the lowest in BCK20 (25.59%). Overall, the BC materials were more recalcitrant than the HC materials 8 . The highest ash contents were found in BCK20 and the lowest in HC; the ash contents increased with increasing kaolinite percentage in the synthesized BC and HC materials.
The ash contents increased by 2.43-, 12.87-, 18.15-, 0.91-, 6.35-, and 9.46-fold in BC, BCK10, BCK20, HC, HCK10, and HCK20, respectively, compared with BM, indicating the formation/condensation of mineral compounds in these materials during pyrolysis 51 . Compared with the HC-based materials, the higher ash contents of the BC-based materials were related to the thermal oxidation of organic compounds during the pyrolysis process 8 . The BM displayed the highest moisture content (2.73%), and moisture was higher in the HCs (HC = 2.41%, HCK10 = 2.11%, and HCK20 = 1.46%) than in the BCs (BC = 0.87%, BCK10 = 1.02%, and BCK20 = 0.41%). The HC materials had higher moisture contents because they were hydrothermally pretreated.
The findings showed that the pH of the BC materials increased with increasing kaolinite percentage during pyrolysis. In contrast, the pH of the HC materials decreased with increasing kaolinite percentage during hydrothermal carbonization. The highest pH value was 10.97 in BCK20, while the lowest was 4.48 in HCK20. The pH increased by 3.14, 4, and 5.7 units in BC, BCK10, and BCK20, respectively, compared with the BM, indicating the elimination of acidic functional groups and the concentration of basic functional groups 52 . Furthermore, with increasing pyrolysis temperature, the recalcitrant cationic species (Ca 2+ , Mg 2+ , Na + ) condensed in the BC materials, which could also increase the pH 53 , 54 . On the other hand, the pH values of HCK10 and HCK20 decreased by only 0.09 and 0.83 units, respectively, compared with BM, indicating minimal removal of basic functional groups 52 , while the pH value of HC increased by 0.91 units compared with the BM. Likewise, the EC of all materials decreased with pyrolysis and hydrothermal carbonization, which could be due to washing all the materials before analysis to remove surface basicity 55 , as well as to dissolved salts released into the liquid phase during hydrothermal carbonization 56 . The highest cation exchange capacity (CEC) was shown by HC (132.8 cmol kg −1 ) and the lowest by BC (16.3 cmol kg −1 ). With pyrolysis, the CEC increased by 34.97% for BCK10 and 38.04% for BCK20 compared with pristine BC, which is attributed to an increase in surface functional groups 57 . Conversely, with hydrothermal carbonization, the CEC decreased by 7.38% for HCK10 and 6.02% for HCK20 compared with HC. Nevertheless, the CEC of the HC materials was several times higher than that of the BC materials due to the greater abundance of oxygen-containing functional groups on the surfaces of the HC materials 58 – 61 .
X-ray diffraction, SEM, and FTIR analyses
The XRD spectra of the BM, kaolinite deposits, and synthesized materials are shown in Fig. 1 . The various visible peaks in the spectra of all the synthesized materials demonstrate the presence of crystalline minerals and inorganic materials. The XRD patterns of the raw materials, i.e., the kaolinite clay deposits and BM, were identified first. In the XRD pattern of the kaolinite deposits, the most intense kaolinite peaks were identified at 2θ = 12.46°, 25.06°, and 26.68° (Fig. 1 a) 62 – 64 , and weaker kaolinite diffraction peaks were revealed at 2θ = 36.7°, 39.50°, 42.56°, 50.3°, 55.16°, and 62.4° 65 – 70 . Four cellulose peaks in BM are displayed at 2θ = 21.8°, 22.4°, 24.3°, and 30°, a peak of the carbon-containing mineral mellite at 2θ = 14.8°, and calcite at 2θ = 39.78° (Fig. 1 a) 71 – 74 . Kaolinite peaks were found in both the BC-based and HC-based materials, confirming that kaolinite was successfully implanted onto the BC and HC matrices. Peak shifting is a sign of interactions between kaolinite and BC or HC during composite synthesis. The kaolinite peaks at 2θ = 20.7°, 26.68°, 39.5°, and 50.3° in BCK10 and BCK20 were shifted to 20.78°, 26.6°, 39.26°, and 50°, respectively; 2θ = 25.06° in BCK10 was shifted to 25.2°, and 2θ = 36.86° in BCK20 was shifted to 36.48° (Fig. 1 b) 64 , 66 , 68 , 75 . Similarly, the kaolinite peaks at 2θ = 12.46°, 26.68°, 36.1°, 38°, and 42.56° in HCK10 and HCK20 were shifted to 12.26°, 26.62°, 36.14°, 38.32°, and 42.46°, respectively; 2θ = 39.5° in HCK10 was shifted to 39.44°, and 2θ = 55.1° was shifted to 54.78° (Fig. 1 b) 64 , 67 , 76 , 77 . The other peaks corresponded to impurities: calcite and quartz in the BC-based materials, and cellulose and mellite in the HC-based materials. The XRD analysis of BC (Fig. 1 b) displayed a peak at 2θ = 23.04°, indicating the presence of graphite 78 .
Other peaks were identified at 2θ = 29.3°, 39.44°, 43.1°, 47.34°, and 48.28°, indicating the presence of calcite 52 , 79 , 80 . Likewise, the cellulose peak in HC is at 2θ = 22.4° 72 and the mellite peak at 2θ = 14.8° (Fig. 1 c). In the BC-based materials, mellite was lost during the pyrolysis of BM, whereas it persisted in the HCs. Therefore, the changes observed for the kaolinite-synthesized BC and HC confirm that the synthesis method successfully implanted kaolinite onto the BC and HC matrices.
SEM images in Fig. 2 depict the surface morphology of the synthesized materials. SEM images are highly useful for obtaining minute details of the structure of the synthesized materials and their modifications; comparing pristine BC and HC with their modified and raw materials therefore allows judgments on the morphological changes occurring during pyrolysis and hydrothermal carbonization. Pyrolysis and hydrothermal carbonization converted the crystalline surface of BM (Fig. 2 a) into porous and amorphous materials, as presented in Fig. 2 b–g. The surfaces of the BC-based and HC-based materials were generally coated with thin film structures and were more irregular than the pristine BC and HC surfaces (Fig. 2 c–g), indicating that, at 3000× magnification, the kaolinite was well distributed over the surfaces and within the pores of BC and HC 81 . The decomposition and volatilization of the biomass produced a small number of pores of different sizes in the pristine BC and HC. In addition, the SEM images showed that kaolinite did not entirely cover the surfaces of the BC-based materials 82 , 83 , whereas it entirely covered the surfaces of the HC-based materials.
According to Li et al. 84 , surface functional groups, particularly O-containing functional groups, may facilitate chemical adsorption by BC and HC. Figure 3 displays the FTIR spectra of the synthesized materials in the range of 400–4400 cm −1 . A broad band at 3300–3800 cm −1 in BM represents O–H bonding 85 ; it persisted through hydrothermal carbonization but vanished during pyrolysis. The structural functional groups found in the BC and HC materials included C=C, O–H, C–O, C=O, C–H, Si–O–Al, Si–O–Si, Si–O, N–H, C–OH, CH 2 , and C–N. Some functional groups were shared by the BC and HC materials, such as C–H and C=O (between 1413 and 1462 cm −1 ) 86 and C–O and O–H (1033 cm −1 ) 87 . For the BC-based materials, some peaks appeared with increasing kaolinite, such as Si–O–Si groups at 465 and 469 cm −1 88 , Si–O at 791 cm −1 89 , and C–O and C–N at 1083 cm −1 90 , 91 , indicating that kaolinite was successfully loaded onto the BC matrix 92 . The HC materials showed more bands than the BC materials, which could be due to minimal losses of functional groups. Likewise, some peaks appeared with increasing kaolinite, such as Si–O–Si at 469 cm −1 85 , Si–O and Al–O vibrations at 762, 696, and 539 cm −1 93 , and Si–O at 784 cm −1 94 . On the other hand, the same peaks appeared in the HC, HCK10, and HCK20 composites, such as C–O and O–H at 1033 cm −1 87 , C–O–C at 1111 cm −1 95 , CH 3 at 1440 cm −1 96 , C=O at 1510 cm −1 97 , COOH at 1700 cm −1 98 , C–H at 2921 cm −1 99 , and O–H (between 3300 and 3800 cm −1 ) 82 . Moreover, some bands were not found in BC or HC alone but became visible when composited with kaolinite. This finding can further assist in predicting the removal efficiency of the kaolinite-synthesized BCs and HCs for various pollutants.
Size and surface characteristics
The BET surface area, pore size, and pore volume of the BM and synthesized materials are shown in Table 2 . The BC materials had a higher surface area than the HC materials; the highest surface area was found in BC (290.89 m 2 g −1 ) and the lowest in HC (5.32 m 2 g −1 ). The surface areas of BCK10 and BCK20 (225.14 and 180.40 m 2 g −1 ) were reduced compared with the pristine BC (290.89 m 2 g −1 ), indicating that the pores of the BC might have been covered/clogged by kaolinite 39 . Conversely, the pristine HC suffered agglomeration and revealed a lower surface area 97 , 100 ; therefore, the surface areas of HCK10 and HCK20 were higher than that of HC. The surface areas of HCK10 and HCK20 were 16.11 and 15.44 m 2 g −1 , about threefold higher than the pristine HC (5.32 m 2 g −1 ); the composite interfaces retained the kaolinite particles, increasing the surface area of the kaolinite-HC 43 , 101 . The pore sizes of BCK10 and BCK20 were 38.31 and 28.17 Å, compared with 28.81 Å for BC, while the pore sizes of HCK10 and HCK20 were 145.90 and 175.92 Å, lower than that of HC (187.71 Å). The largest pore size appeared in HC (187.71 Å) and the smallest in BCK20 (28.17 Å). With increasing amounts of kaolinite deposit, the surface area decreased by 22.68% in BCK10 and 38% in BCK20 compared with pristine BC, whereas kaolinite addition increased the surface area of HCK10 by 203% and of HCK20 by 190% compared with pristine HC. Overall, the pore size of the HC materials was several times larger than that of the BC materials. Previous studies showed that the low surface area of zeolite- and silica-composited BC is due to plugging of the pores by the minerals present 51 . Yao et al. 35 also mentioned that blockage of BC pores by clay mineral particles could be the cause of the decreased surface area of clay-biochar composites.
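The percentage changes quoted above follow from the Table 2 surface areas and can be checked with simple arithmetic (a sketch; with the reported values, BCK10 comes out at about a 22.6% decrease versus the stated 22.68%, and HCK10 at about a 203% increase):

```python
def pct_change(old: float, new: float) -> float:
    # Signed percent change of `new` relative to `old`
    return (new - old) / old * 100.0

# BET surface areas (m2 g-1) reported in Table 2
bc, bck10, bck20 = 290.89, 225.14, 180.40
hc, hck10, hck20 = 5.32, 16.11, 15.44

bck10_change = pct_change(bc, bck10)  # decrease for BCK10 vs pristine BC
bck20_change = pct_change(bc, bck20)  # decrease for BCK20 vs pristine BC
hck10_change = pct_change(hc, hck10)  # increase for HCK10 vs pristine HC
```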
The hydrodynamic size of the particles of the synthesized materials in aqueous suspensions was determined (Table 2 ). Dynamic light scattering (DLS) reports an equivalent diameter, so any particle or particle aggregate with the same equivalent diameter is seen as having a similar size. The average particle (hydrodynamic) size was 2.63 μm for BC, 2.34 μm for BCK10, 2.73 μm for BCK20, 2.19 μm for HC, 2.38 μm for HCK10, and 3.10 μm for HCK20. These particle size analyses suggest that HC showed the minimum particle size, while HCK20 showed the maximum.
Colloidal dispersions have an electrokinetic potential known as the zeta potential, whose value is influenced by the surface charge of the individual particles. In our study, the zeta potential values of the synthesized materials were determined as a function of the solution pH (Table 2 ). The zeta potentials of the pristine BC and BC-based materials ranged from − 25.06 to − 25.08 mV, whereas those of the pristine HC and HC-based materials ranged from − 21.68 to − 24.98 mV. The highest (most negative) zeta potential was shown by BCK10 (− 25.08 mV) and the lowest by HCK20 (− 21.86 mV). Consequently, the negative charge of the pristine BC and BC-based materials is slightly higher than that of the pristine HC and HC-based materials. The zeta potentials of all the synthesized materials were negative, indicating that all their surfaces are negatively charged. A larger (more negative) zeta potential is beneficial for remediation, especially for removing cationic species.
Elemental composition and carbon stability
The elemental composition of the synthesized materials is presented in Table 3 . Compared with BM, thermal treatment enhanced the total C contents of the BC and HC materials; an increased degree of carbonization may be the cause of the rising C contents with pyrolysis and hydrothermal carbonization. Among the synthesized materials, the highest C content was observed in BC and the lowest in HCK20. The HC materials had higher H contents, ranging from 5.33 to 5.54%, whereas the BC materials ranged from 1.08 to 1.41%. The maximum N content among the synthesized materials was in BCK20 and the minimum in BC. The C contents of BC, BCK10, BCK20, HC, HCK10, and HCK20 increased by 47.8%, 44.1%, 43.6%, 18.3%, 20.9%, and 17.8%, respectively, compared with BM. The C content was highest in pristine BC and decreased with kaolinite modification in the BC-based materials, by 6.59% in BCK10 and 7.32% in BCK20; meanwhile, the C content increased in HCK10 (3.37%) and slightly decreased in HCK20 (0.52%) compared with HC. Increasing lignin content in biomass has been reported to enhance carbonization and increase the C content of BC 102 , 103 , 105 . Other studies reported that cellulose and hemicelluloses also significantly affect the C content 104 , 105 . On the other hand, the reduction of H contents in the BC materials was greater than in the HC materials: compared with BM, the reduction in H for BC, BCK10, BCK20, HC, HCK10, and HCK20 was 76.3%, 78.4%, 82.0%, 10.8%, 7.3%, and 9.9%, respectively. Gai et al. 106 reported that the decrease in the H content of BC was due to the loss of water, gaseous H 2 , hydrocarbons, and tarry vapors. The total N contents decreased with pyrolysis and hydrothermal carbonization in BC (27.3%), HC (3.1%), and HCK10 (11.9%), while they increased in BCK10 (21.5%), BCK20 (32.6%), and HCK20 (18.9%) compared with BM.
Furthermore, the N contents increased with increasing kaolinite modification of BC and HC, being highest in BCK20 and HCK20. The total O contents decreased with pyrolysis and hydrothermal carbonization of the biomass in all materials; the reduction in O was 97.1% in BC, 92.5% in BCK10, 94.5% in BCK20, 27.1% in HC, 30.8% in HCK10, and 29.6% in HCK20, compared with BM. Moreover, the BC-based materials showed higher O contents than pristine BC, while the HC-based materials presented slightly lower O contents than pristine HC. Dehydration, volatilization, and depolymerization could be the causes of the decrease in O contents with hydrothermal carbonization and pyrolysis 107 .
Figure 4 illustrates the Van Krevelen diagram, which plots the molar O/C and H/C ratios and is frequently used to assess the recalcitrance of the BC and HC materials. With pyrolysis and hydrothermal carbonization, the conocarpus waste biomass is dehydrated and depolymerized into smaller dissociation products 24 , 108 . Reduced H/C and O/C molar ratios indicate reduced polarity and a greater degree of aromaticity, which in turn increase the stability of BC 55 . Other research attributed the higher stability of BC composites, compared with HC composites, to their greater polyaromatic carbon content 109 . From Table 3 and Fig. 4 , the surface polarity index (indicated by the O/C molar ratio) of the BC and HC materials decreased with pyrolysis and hydrothermal carbonization compared with BM, indicating an increase in the hydrophobicity of these materials. Overall, the highest values of O/C and H/C (0.604 and 1.47) were found for the BM; among the synthesized materials, the maximum O/C and H/C values (0.36 and 1.09) were shown by HCK20 and HC, respectively. The BC-based materials showed slightly decreased H/C values and increased O/C values compared with pristine BC, whereas the HC-based materials showed slightly increased H/C values and decreased O/C values compared with pristine HC. The O/C ratios of BC (0.01), BCK10 (0.025), and BCK20 (0.02) were less than 0.2, suggesting high stability and a half-life of more than 1000 years 110 . In contrast, the O/C ratios of HC (0.360), HCK10 (0.331), and HCK20 (0.350) were between 0.2 and 0.6, suggesting a half-life of 100 to 1000 years 110 . For BM, the O/C ratio was 0.604, suggesting a half-life of less than 100 years 110 . Likewise, the low H/C values of the BC and HC materials compared with BM indicated high aromaticity and reactivity 111 .
In addition, the much lower H/C ratios of the BC materials compared with the HC materials demonstrated that the BC materials were heavily carbonized and highly aromatic in structure. According to the IBI 112 criteria, H/C < 0.7 indicates a greater content of fused aromatic ring structures, which applied to the BC materials, whereas the H/C molar ratios of the HC materials were > 0.7. Therefore, the BC materials showed higher aromaticity and lower polarity than the HC materials. Overall, the H/C and O/C ratios of the BC-based and HC-based materials indicated more aromatic C and less hydrophilic surfaces 113 .
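The O/C half-life bands and the IBI H/C criterion used above can be expressed as simple classifiers (a sketch of the cited thresholds, not an official IBI tool):

```python
def o_c_halflife(o_c: float) -> str:
    # Half-life bands tied to the molar O/C ratio, as cited in the text
    if o_c < 0.2:
        return "> 1000 years"
    if o_c <= 0.6:
        return "100-1000 years"
    return "< 100 years"

def aromatic_structure(h_c: float) -> str:
    # IBI-style criterion: H/C < 0.7 implies fused aromatic ring structures
    return "fused aromatic rings" if h_c < 0.7 else "less condensed structure"
```

Applied to the reported ratios, BC (O/C = 0.01) falls in the > 1000-year band, HC (0.360) in the 100-1000-year band, and BM (0.604) just beyond it, in the < 100-year band.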
TGA and recalcitrance index (R 50 )
In this study, thermogravimetric analysis (TGA) was applied to investigate the long-term stability of the synthesized materials. The results of the TGA-DTG analysis are shown in Fig. 5 a,b. Our results indicated that the BM thermally decomposed earlier than the HC and BC materials due to its thermal instability. The thermogravimetric curves of the different BC and HC materials showed similar behavior, with weight loss (%) following a decreasing trend with rising temperature. The sudden weight loss began at about 650–700 °C for BC, BCK10, and BCK20, at about 350–400 °C for HC, HCK10, and HCK20, and at about 250–350 °C for BM. The thermograms displayed two general regions of weight loss: (i) around 300 °C for BM and the HC materials, because of thermal degradation of cellulose and hemicellulose compounds 114 , and (ii) around 600–1000 °C, because of lignin degradation 51 . The order of degradability was: BCK20 < BCK10 < BC < HCK20 < HCK10 < HC < BM. Therefore, the BC and HC with a 20% kaolinite ratio were the most stable, followed by those with a 10% kaolinite ratio, and then the pristine materials. Moreover, a higher weight loss was observed in the HC materials than in the BC materials.
The recalcitrance of the synthesized materials in soil depends on the potential of the BC and HC materials to resist thermal, physical, and chemical degradation; the formation of organometallic complexes and the aromatic carbon structure play significant roles. Due to its increased aromaticity, the C of the BC materials has a higher recalcitrance potential than that of the HC materials and BM; however, the recalcitrance and stability of C in the soil after adding BC and HC materials vary with the kind and composition of the feedstock, the texture and structure of the soil, and other environmental factors. Hence, the recalcitrance index of BC and HC materials needs to be determined relative to graphite, because graphite is one of the most stable forms of C 115 . Accordingly, Harvey et al. 47 used TGA thermograms to create a novel recalcitrance index, denoted R 50 , that predicts the recalcitrance potential of such materials.
To calculate the recalcitrance index (R 50 ), the TGA thermograms of all materials were corrected for moisture and ash contents; the index reflects the stability of the synthesized materials and the extent of C sequestration 116 . Figure 5 c,d display the moisture- and ash-corrected TGA thermograms. The R 50 values of BM and HC in this study were 0.40 and 0.42, indicating that these are highly degradable and belong to class 3 (Table 3 ). BC, BCK10, and BCK20 fell in class 1, with R 50 values of 0.78, 0.81, and 0.79, respectively, indicating high recalcitrance potential 47 . The R 50 values of the HCK10 and HCK20 composites were 0.50 and 0.51, suggesting that they were minimally degradable. The increased R 50 of the BCK10 and BCK20 composites could be attributed to the presence of kaolinite, which improved the oxidation resistance of the BC. A previous study by Ahmad et al. 51 demonstrated that the higher R 50 of silica-BC composites could be related to probable protection of C by silica particles, which may be controlled through the pyrolysis process. The reason was also attributed to the change from aromatic C–C/C=C functionality to the C–O and C–H configuration of the BC surface, forming stable organo-mineral complexes (i.e., C–O–Al and C–O–Si) with clay minerals 117 . Wang et al. 115 showed that R 50 increased to 0.89 for kaolinite-composited BC, reflecting that mineral-composited BC can improve the thermal stability of BC. According to Ahmad et al. 111 , changes in the structural arrangements within the silica-BC complex may be responsible for the high recalcitrance and C sequestration potential of silica-composited BC. Therefore, the increased R 50 values of the BC materials suggest greater recalcitrance and stability owing to the particular interactions of kaolinite with the BC matrix.
Although it situates recalcitrance relative to graphite (a highly stable form of C), the recalcitrance index (R50) provides no information on the specific timescale of C sequestration. Consequently, the C sequestration potential (CS) of the BC and HC materials was calculated from R50, yield, and C contents. A higher CS percentage depends on (i) the yield (%), (ii) the C contents (%) before and after pyrolysis or hydrothermal carbonization, and (iii) the R50 value. In our study, the CS of the BC materials ranged from 46.43% to 51.63%, while the HC materials showed CS in the range of 29.16%–39.94%. The highest CS value was found in BCK20 (51.63%) and the lowest in HC. Moreover, CS increased with the kaolinite percentage in BC and HC, reaching 51.63% in BCK20 and 39.95% in HCK20 compared with the pristine materials. Ahmad et al. 111 revealed that the presence of silica in BC-based materials enhanced the C sequestration potential through modifications of the structural arrangements of the silica-BC complex. Another study, by Sewu et al. 118 , indicated that bentonite-BC materials had higher C sequestration potential than the original BC. Therefore, in our study, the formation of Si–O–C and Al–O–C bonds in the kaolinite-composited BC and HC materials conferred the highest C sequestration potential.
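The CS calculation described above can be sketched as follows. The yield (36.84%) and R50 (0.79) for BCK20 are taken from the text, but the feedstock and char C contents here are hypothetical round numbers chosen only to illustrate the arithmetic:

```python
def carbon_sequestration(yield_pct, c_feed_pct, c_char_pct, r50):
    """C sequestration potential (%): the fraction of feedstock C retained
    in the char (yield x C_char / C_feed), weighted by the R50 index."""
    return (yield_pct / 100.0) * (c_char_pct / c_feed_pct) * r50 * 100.0

# BCK20: yield 36.84% and R50 = 0.79 from the text; C contents assumed.
cs_bck20 = carbon_sequestration(36.84, c_feed_pct=45.0,
                                c_char_pct=80.0, r50=0.79)
print(round(cs_bck20, 1))  # 51.7 with these assumed C contents
```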
Toxicity evaluation
Contents of PAHs
Biomass combustion is one of the primary sources of anthropogenic PAHs 119 . The total PAHs quantities and the proportional contributions of individual PAHs provide valuable data on the quality of the synthesized BC and HC materials with respect to environmental safety. The contents of PAHs in the BC and HC materials were determined to assess PAHs retention across the pyrolysis and hydrothermal carbonization processes, and the findings are presented in Table 4 . Pristine HC showed higher amounts of all PAHs than pristine BC. The acenaphthene content was around 2.6 times greater in pristine HC than in pristine BC. Moreover, around 2 times greater phenanthrene, anthracene, and benzo[e]pyrene (BeP) contents, about 1.5 times greater naphthalene and acenaphthylene contents, and 1.3 times greater fluorene, fluoranthene, pyrene, and retene contents were detected in pristine HC compared with pristine BC. These findings support previous studies showing that pristine HC contains more PAHs than pristine BC 8 , 120 . Compared with dry pyrolysis, tar condensation on the HC surface during hydrothermal carbonization of biomass might have caused the retention of PAHs in these materials 58 . In contrast, modification of BC and HC with kaolinite decreased the PAHs contents compared with pristine BC and HC. The HC-based materials (HCK10 and HCK20) showed a greater reduction in PAHs contents than the BC-based materials (BCK10 and BCK20). Likewise, a higher degree of kaolinite modification produced a greater reduction in PAHs: BCK20 and HCK20 reduced PAHs more than BCK10 and HCK10. Generally, PAHs contents in the HC-based materials were lower than in the BC-based materials. Compared with pristine BC and HC, the reduction for these materials ranged significantly between 1.2 and 6.4 times for most of the PAHs contents. For instance, the reduction in phenanthrene was about 5 times for HCK10 and 6.4 times for BCK20 compared with HC.
Similarly, the reduction in naphthalene was about 1.2 times for BCK10 and 2.6 times for BCK20 compared with pristine BC, and about 2.8 times for HCK10 and 3.8 times for HCK20 compared with pristine HC. Overall, ten of the seventeen PAHs were detected in the BC and HC materials. The total PAHs concentrations of the BC and HC materials ranged from 739.1 μg kg −1 in HCK20 to 2770.7 μg kg −1 in HC. The ∑PAHs contents followed the order HC (2770.7 μg kg −1 ) > BC (1752.9 μg kg −1 ) > BCK10 (1514.1 μg kg −1 ) > HCK10 (972.8 μg kg −1 ) > BCK20 (934.4 μg kg −1 ) > HCK20 (739.1 μg kg −1 ). The ∑15 PAHs contents (the ∑16 US-EPA PAHs, except benzo[b]fluoranthene) were 1487.8 μg kg −1 in BC, 1293.3 μg kg −1 in BCK10, 799.5 μg kg −1 in BCK20, 2383.2 μg kg −1 in HC, 861.71 μg kg −1 in HCK10, and 619.5 μg kg −1 in HCK20. These ∑15 PAHs contents in the BC and HC materials were below the ∑16 PAHs threshold (6000–20,000 μg kg −1 ; lower limit 6000 μg kg −1 ) provided by the IBI 112 . Therefore, the PAHs in the BC and HC materials are considered safe for use as soil amendments, with the lowest potential risk for PAHs-related effects.
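The screening against the IBI limit reduces to a simple comparison. The ∑15 PAHs totals below are the values reported in the text, and 6000 μg kg−1 is the lower bound of the IBI 6000–20,000 μg kg−1 range:

```python
IBI_LOWER_LIMIT = 6000.0  # ug/kg: lower bound of the IBI 6000-20000 range

sum15_pahs = {  # Sigma-15 PAHs totals (ug/kg) reported in the text
    "BC": 1487.8, "BCK10": 1293.3, "BCK20": 799.5,
    "HC": 2383.2, "HCK10": 861.71, "HCK20": 619.5,
}

# every material passes the IBI screening
safe = {m: total < IBI_LOWER_LIMIT for m, total in sum15_pahs.items()}

# rank materials from highest to lowest PAH burden
ranking = sorted(sum15_pahs, key=sum15_pahs.get, reverse=True)
print(ranking)  # ['HC', 'BC', 'BCK10', 'HCK10', 'BCK20', 'HCK20']
```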
Based on the number of aromatic rings, 3-ring PAHs predominated in the BC and HC materials, followed by 4-, 2-, and 5-ring PAHs (Fig. 6 ). The large proportion of low-molecular-weight 3-ring PAHs in the BC and HC materials may result from their atmospheric emission during pyrolysis and their retention in the condensed tar during hydrothermal carbonization. Comparatively, pristine BC contained higher proportions of 2- and 4-ring PAHs than pristine HC, while pristine HC contained higher proportions of 3- and 5-ring PAHs. In the kaolinite-composited materials, the proportions of 2- and 5-ring PAHs were equal at 17% and 3% in BCK10, BCK20, HCK10, and HCK20 (except the 2-ring PAHs in BCK20, which were 13%). The proportions of 3-ring PAHs were ordered BCK20 (67%) > HCK10 and HCK20 (64%) > BCK10 (61%). Likewise, the proportions of 4-ring PAHs were ordered BCK10 (19%) > HCK10 and HCK20 (17%) > BCK20 (16%). Generally, the proportion of 3-ring PAHs was the highest (59–67%) in all synthesized materials, while that of 5-ring PAHs was the lowest (2–3%). Previous studies mentioned that the toxicity and PAHs concentration in BC decreased significantly with increasing temperature, which can be attributed to evaporation at higher temperatures 121 – 123 . In addition, volatility and thermal degradation are the most important factors in the elimination of PAHs compounds 124 . Washing the BC and HC materials could also reduce negative impacts on plant growth by lowering the PAHs contents 125 . Hence, kaolinite-composited BC and HC could be better options as amendments for soil and water owing to their decreased PAHs contents.
Biochar (BC), BC with 10% kaolinite enrichment (BCK10), BC with 20% kaolinite enrichment (BCK20), Hydrochar (HC), HC with 10% kaolinite enrichment (HCK10), HC with 20% kaolinite enrichment (HCK20).
Maize germination and growth
The germination test results for maize ( Zea mays L.) are presented in Fig. 7 a. In the control (CK) treatment, 75% germination of maize was recorded. Seed germination was significantly affected by the application of the BC and HC materials. The highest germination was observed in the HC (0.2 g) treatment (85%), while the lowest was in the HC (0.4 g) and HCK20 (0.2 g) treatments (55%). In addition, there were no significant differences between the 0.2 and 0.4 g applications of BC, BCK10, BCK20, and HCK10. The order of the germination percentages was as follows: HC (0.2 g) > CK, BC (0.2 and 0.4 g), BCK10 (0.2 and 0.4 g), BCK20 (0.2 and 0.4 g), HCK10 (0.2 and 0.4 g), HCK20 (0.4 g) > HC (0.4 g), HCK20 (0.2 g). These findings show that the aforementioned synthesized materials are safe when applied to soil to enhance plant production without phytotoxic effects. They support previous studies reporting that the application of HC mixed with kaolinite could mitigate greenhouse gas emissions and might support improved soil retention of C and N for better management of agricultural nutrients 101 . Likewise, another study confirmed that applying a clay-BC composite positively impacted the yield and quality of blue grass and improved soil properties 126 .
To evaluate the effect of the various treatments, the shoot and root lengths of the maize seedlings were also determined, and the results are shown in Fig. 7 b. Compared with the CK treatment, all treatments significantly improved maize growth (except BC at 0.2 g). The shoot lengths of maize seedlings for the BCK20 (0.2 and 0.4 g), HC (0.2 and 0.4 g), HCK10 (0.2 g), and HCK20 (0.2 g) treatments were 54.63%, 60.62%, 65.21%, 61.55%, 53.56%, and 64.20% higher, respectively, than in CK. The other treatments, BC (0.4 g), BCK10 (0.2 and 0.4 g), HCK10 (0.4 g), and HCK20 (0.4 g), also increased the shoot length of maize seedlings, by 36.11%, 25.62%, 40.60%, 24.36%, and 41.18%, respectively, compared with CK. These findings confirm the potential of the synthesized materials to improve plant growth. Likewise, HC (0.2 and 0.4 g) and HCK10 (0.2 g) significantly increased the root length of maize seedlings, by 89.27%, 90.80%, and 88.29%, respectively, compared with CK. The root-length increases of the other treatments were lower than these but still higher than CK: 27.16% in BC (0.2 g), 38.71% in BC (0.4 g), 62.07% in BCK10 (0.2 g), 51.08% in BCK10 (0.4 g), 41.48% in BCK20 (0.2 g), 53.16% in BCK20 (0.4 g), 51.96% in HCK10 (0.4 g), 60.31% in HCK20 (0.2 g), and 52.06% in HCK20 (0.4 g).
Figure 7 c,d depict the fresh and dry weights of maize seedlings as influenced by the synthesized materials. The HC (0.2 g) treatment enhanced the fresh and dry maize weights by 112.49% and 105.40%, respectively, compared with CK, while the other treatments produced smaller gains. Hence, all the treatments significantly increased the fresh and dry weights, but in varying proportions compared with the CK treatment. Mumme et al. 127 showed that biochar-zeolite composites increased the germination rate compared with non-amended treatments. Another study, by Medha et al. 41 , showed that Sorghum grass root and shoot lengths increased significantly compared with CK; furthermore, among the tested materials, the shoot and root lengths increased substantially, by five- and tenfold, respectively, with increasing bentonite-biochar composite and kaolinite fractions. In a study on the effect of biochar on the physiological growth of maize, Cong et al. 128 found that biochar increased the dry biomass of maize by 22.22% relative to the control condition, while an increase of up to 58.39% in plant height was noted relative to the control with no biochar amendment. In another study, Yang et al. 129 stated that a kaolinite composite with walnut shell-derived biochar improved its surface and adsorptive characteristics and nutrient-release capacity, which ultimately positively impacted plant growth. Fregolente et al. 130 found a positive impact of hydrochar application on root and shoot development and dry biomass production of maize. Therefore, the seed germination and seedling growth parameters revealed positive impacts of the synthesized materials in varying proportions. Consequently, it is highly recommended to properly assess BC and HC materials before applying them to soil as amendments.
The correlations between the characteristics of the synthesized materials (R50, CS, O/C, CEC, SA, and ∑ total PAHs toxicity) and maize germination and growth were established (Table 5 ). The germination and growth values were averaged across the two application rates. The analysis revealed significant correlations between maize germination and growth and the characteristics of the synthesized materials (R50, CS, O/C, CEC, SA, and PAHs). The significant correlation coefficients (r) between germination and R50, O/C, CEC, and SA were 0.75, −0.78, −0.78, and 0.81, respectively. Likewise, the r values between shoot length and R50, O/C, CEC, and SA were −0.52, 0.54, 0.55, and −0.71, respectively. Similarly, the r values between root length and R50, CS, O/C, CEC, and SA were −0.78, −0.83, 0.76, 0.78, and −0.80, respectively. On the other hand, the r value between fresh weight and ∑ total PAHs was 0.53, while it was −0.50 between dry weight and SA. According to these findings, the characteristics of the synthesized materials were significantly related to maize germination and growth.
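The pairwise coefficients above are ordinary Pearson correlations. A minimal sketch: only the surface areas come from Table 2, while the germination values here are invented placeholders used solely to show the computation:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

surface_area = [290.89, 225.14, 180.40, 5.32, 16.11, 15.44]  # m2/g, Table 2
germination  = [80.0, 80.0, 78.0, 70.0, 75.0, 55.0]          # %, hypothetical

r = pearson_r(surface_area, germination)
print(-1.0 <= r <= 1.0)  # True: r is always bounded in [-1, 1]
```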
Implications of the study
In recent years, fabricated BC and HC materials have received considerable interest. Although a few studies have examined the feasibility of clay-supported BC and HC composites for different applications, evidence from germination and growth parameters is still very limited. In our study, pristine BC and HC and kaolinite-composited BC and HC were synthesized and characterized. The synthesized materials can be used successfully to sequester C, immobilize inorganic and organic pollutants in water and soil, increase N and phosphorus availability, and increase plant production at the field scale. Our findings indicate that the BC-based materials, with R50 > 0.7 and CS > 47.63%, could be used for long-term stability and C sequestration owing to their high recalcitrance potential compared with the HC-based materials. The higher stability and C sequestration of these charred materials could help mitigate climate change. Furthermore, the synthesized materials can be used successfully for water and soil remediation: the BC-based materials have high surface area and zeta potential, while the HC materials have high CEC and abundant surface functional groups. Moreover, kaolinite-composited BC and HC can be used for plant production, as they showed no toxicity in the germination test and the lowest potential risk for PAHs-related impacts. On the other hand, the HC-based materials typically offer the advantage of a low pH, making them suitable for arid regions that suffer from high soil pH; their addition to alkaline soil can counteract the alkalinity and lower the pH in these areas. In addition, the kaolinite-composited BC and HC materials can enhance soil health by improving soil physical structure, increasing porosity, reducing bulk density, enhancing soil aggregation, promoting degradation of organic contaminants, improving water and nutrient retention, and contributing to climate change mitigation.

Results and discussion
Characterization
Proximate and chemical analyses
The results of the chemical and proximate analyses of the fabricated materials are presented in Table 1 . The HC materials (HC, HCK10, and HCK20) had higher yields than the BC materials (BC, BCK10, and BCK20): the highest yield was found in HCK20 and the lowest in BC. Moreover, increased yield was associated with an increasing percentage of kaolinite, as noticed in BCK20 and HCK20 compared with the pristine materials. The yields followed the order HCK20 (64.35%) > HCK10 (62.19%) > HC (56.74%) > BCK20 (36.84%) > BCK10 (32.89%) > BC (24.15%). The lower yield of the pristine BC materials can be attributed to greater volatile-matter loss and weight loss from the biomass during pyrolysis than in the HC materials, which were made through hydrothermal carbonization in an airtight container. The higher yield of the kaolinite-synthesized materials indicated greater thermal stability than the pristine materials, which could be related to the highly resistant nature of kaolinite against thermal degradation 42 . The volatiles decreased with increasing kaolinite percentage and carbonization: the reduction was 3.94 times for BC, 4.13 times for BCK10, and 4.07 times for BCK20 relative to BM, and 1.44 times for HC, 1.67 times for HCK10, and 2.02 times for HCK20 relative to BM. Likewise, the fixed C was higher in pristine BC and HC than in the BC- and HC-based composites; the highest fixed C was observed in BC (70.20%) and the lowest in BCK20 (25.59%) among the synthesized materials. Across all synthesized materials, the BC materials were more recalcitrant than the HC materials 8 . The highest ash contents were found in BCK20 and the lowest in HC, and the ash contents increased with increasing kaolinite percentage in the synthesized BC and HC materials.
The ash contents increased by 2.43-, 12.87-, 18.15-, 0.91-, 6.35-, and 9.46-fold in BC, BCK10, BCK20, HC, HCK10, and HCK20, respectively, compared with BM, indicating the formation/condensation of mineral compounds in these materials during pyrolysis 51 . Compared with the HC-based materials, the increased ash contents of the BC-based materials were related to the thermal oxidation of organic compounds during the pyrolysis process 8 . The BM displayed the highest moisture content (2.73%), and the moisture content was higher in the HCs (HC = 2.41%, HCK10 = 2.11%, and HCK20 = 1.46%) than in the BCs (BC = 0.87%, BCK10 = 1.02%, and BCK20 = 0.41%). The HC materials had a higher moisture content because they were hydrothermally pretreated.
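The fold-changes quoted above can be computed either way depending on the convention; here we assume "increased by n-fold" means (sample − baseline)/baseline and "reduced n times" means baseline/sample. The baseline values in the example are hypothetical, chosen only to reproduce the arithmetic of the quoted factors:

```python
def fold_increase(sample, baseline):
    """Relative increase over baseline (assumed reading of 'n-fold increase')."""
    return (sample - baseline) / baseline

def reduction_factor(baseline, sample):
    """How many times smaller the sample is than baseline ('reduced n times')."""
    return baseline / sample

# Hypothetical ash contents (%): a BM baseline of 5.0 yields the quoted
# 2.43-fold increase for BC if BC ash were 17.15%.
print(round(fold_increase(17.15, 5.0), 2))      # 2.43
# Hypothetical volatiles (%): 60.0 in BM vs 15.23 in BC gives ~3.94x reduction.
print(round(reduction_factor(60.0, 15.23), 2))  # 3.94
```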
The results showed that the pH of the BC materials rose with increasing kaolinite percentage during pyrolysis, whereas the pH of the HC materials decreased with increasing kaolinite percentage during hydrothermal carbonization. The highest pH value was 10.97 in BCK20 and the lowest was 4.48 in HCK20. The pH increased by 3.14, 4, and 5.7 units in BC, BCK10, and BCK20, respectively, compared with BM, indicating the elimination of acidic functional groups and the concentration of basic functional groups 52 . Furthermore, with increasing pyrolysis temperature, recalcitrant cationic species (Ca 2+ , Mg 2+ , Na + ) condensed in the BC materials, which could also have increased the pH 53 , 54 . On the other hand, the pH values of HCK10 and HCK20 decreased by only 0.09 and 0.83 units, respectively, compared with BM, indicating minimal removal of basic functional groups 52 ; the pH of HC itself increased by 0.91 units compared with BM. Likewise, the EC of all materials decreased with pyrolysis and hydrothermal carbonization, which could be due to washing all the materials before analysis to remove surface basicity 55 , as well as dissolved salts being released into the liquid phase during hydrothermal carbonization 56 . The highest cation exchange capacity (CEC) was found in HC (132.8 cmol kg −1 ) and the lowest in BC (16.3 cmol kg −1 ). With pyrolysis, the CEC increased by 34.97% for BCK10 and 38.04% for BCK20 compared with pristine BC, which is attributed to an increase in surface functional groups 57 . Conversely, with hydrothermal carbonization, the CEC decreased by 7.38% for HCK10 and 6.02% for HCK20 compared with HC. Nevertheless, the CEC of the HC materials was several times higher than that of the BC materials owing to more oxygen-containing functional groups on the surfaces of the HC materials 58 – 61 .
X-ray diffraction, SEM, and FTIR analyses
The XRD spectra of the BM, kaolinite deposits, and synthesized materials are shown in Fig. 1 . The various visible peaks in the spectra of all the synthesized materials demonstrate the presence of crystalline minerals and inorganic materials. The XRD patterns of the raw materials, i.e., the kaolinite clay deposits and BM, were identified first. In the XRD pattern of the kaolinite deposits, the most intense kaolinite peaks were identified at 2θ = 12.46°, 25.06°, and 26.68° (Fig. 1 a) 62 – 64 , and less intense kaolinite peaks were revealed at 2θ = 36.7°, 39.50°, 42.56°, 50.3°, 55.16°, and 62.4° 65 – 70 . Four cellulose peaks in BM are displayed at 2θ = 21.8°, 22.4°, 24.3°, and 30°, a peak of the carbon-containing mineral mellite at 2θ = 14.8°, and calcite at 2θ = 39.78° (Fig. 1 a) 71 – 74 . Kaolinite peaks were found in both the BC-based and HC-based materials, confirming that kaolinite was successfully implanted onto the BC and HC matrices. Peak shifting is a sign of interactions between kaolinite and BC or HC during synthesis of the composites. Kaolinite peaks at 2θ = 20.7°, 26.68°, 39.5°, and 50.3° in BCK10 and BCK20 were shifted to 20.78°, 26.6°, 39.26°, and 50°, respectively; the peak at 2θ = 25.06° in BCK10 was shifted to 25.2°, and that at 2θ = 36.86° in BCK20 was shifted to 36.48° (Fig. 1 b) 64 , 66 , 68 , 75 . Similarly, kaolinite peaks at 2θ = 12.46°, 26.68°, 36.1°, 38°, and 42.56° in HCK10 and HCK20 were shifted to 12.26°, 26.62°, 36.14°, 38.32°, and 42.46°, respectively; the peak at 2θ = 39.5° in HCK10 was shifted to 39.44°, and that at 2θ = 55.1° was shifted to 54.78° (Fig. 1 b) 64 , 67 , 76 , 77 . The other peaks in the BC-based materials were impurities corresponding to calcite and quartz, and to cellulose and mellite in the HC-based materials. The XRD analysis of BC (Fig. 1 b) displayed a peak at 2θ = 23.04°, indicating the presence of graphite 78 .
Other peaks were identified at 2θ = 29.3°, 39.44°, 43.1°, 47.34°, and 48.28°, indicating the presence of calcite 52 , 79 , 80 . Likewise, the cellulose peak in HC is at 2θ = 22.4° 72 and the mellite peak at 2θ = 14.8° (Fig. 1 c). In the BC-based materials, mellite was lost during pyrolysis of the BM, whereas it persisted in the HC materials. Therefore, the changes in the patterns of the kaolinite-synthesized BC and HC showed that the synthesis method successfully implanted kaolinite onto the BC and HC matrices.
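The peak assignments above can be cross-checked with Bragg's law, d = λ/(2 sin θ). The Cu Kα wavelength (1.5406 Å) is the usual assumption for laboratory XRD; the source does not state the anode. With it, the strong kaolinite reflection at 2θ = 12.46° maps onto the ~7.1 Å basal (001) spacing characteristic of kaolinite:

```python
import math

CU_K_ALPHA = 1.5406  # Angstrom, assumed Cu K-alpha radiation

def d_spacing(two_theta_deg, wavelength=CU_K_ALPHA):
    """Bragg's law: interplanar spacing d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

print(round(d_spacing(12.46), 2))  # 7.1  -> kaolinite (001) basal spacing
print(round(d_spacing(26.68), 2))  # 3.34
```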
SEM images in Fig. 2 depict the surface morphology of the synthesized materials. SEM images are highly useful for obtaining fine details of the structure of the synthesized materials and their modifications. Comparing pristine BC and HC with their modified and raw materials therefore allows judgments on the morphological changes that occurred during pyrolysis and hydrothermal carbonization. Pyrolysis and hydrothermal carbonization converted the crystalline surface of BM (Fig. 2 a) into porous and amorphous materials, as presented in Fig. 2 b–g. The surfaces of the BC- and HC-based materials were generally coated with thin-film structures and were more irregular than the pristine BC and HC surfaces (Fig. 2 c–g); at 3000× magnification, kaolinite was seen to be well distributed on the surfaces and within the pores of BC and HC 81 . The decomposition and volatilization of biomass produced a small number of pores of different sizes in pristine BC and HC. In addition, the SEM images showed that kaolinite did not entirely cover the surfaces of the BC-based materials 82 , 83 , whereas it entirely covered the surfaces of the HC-based materials.
According to Li et al. 84 , surface functional groups, particularly O-containing functional groups, may facilitate chemical adsorption by BC and HC. Figure 3 displays the FTIR spectra of the synthesized materials in the range of 400–4400 cm −1 . A broad band was observed at 3300–3800 cm −1 in BM, representing O–H bonding 85 , which persisted through hydrothermal carbonization but vanished during pyrolysis. The structural–functional groups found in the BC and HC materials included C=C, O–H, C–O, C=O, C–H, Si–O–Al, Si–O–Si, Si–O, N–H, C–OH, CH 2 , and C–N. Some functional groups were shared by the BC and HC materials, such as C–H and C=O (between 1413 and 1462 cm −1 ) 86 and C–O and O–H (1033 cm −1 ) 87 . For the BC-based materials, some peaks appeared with increasing kaolinite, such as Si–O–Si groups at 465 and 469 cm −1 88 , Si–O at 791 cm −1 89 , and C–O and C–N at 1083 cm −1 90 , 91 ; these peaks indicated that kaolinite was successfully loaded onto the BC matrix 92 . The HC materials showed more bands than the BC materials, which could be due to minimal losses of functional groups. Likewise, some peaks appeared with increasing kaolinite, such as Si–O–Si at 469 cm −1 85 , Si–O and Al–O vibrations at 762, 696, and 539 cm −1 93 , and Si–O at 784 cm −1 94 . On the other hand, the same peaks appeared in the HC, HCK10, and HCK20 composites, such as C–O and O–H at 1033 cm −1 87 , C–O–C at 1111 cm −1 95 , CH 3 at 1440 cm −1 96 , C=O at 1510 cm −1 97 , COOH at 1700 cm −1 98 , C–H at 2921 cm −1 99 , and O–H (between 3300 and 3800 cm −1 ) 82 . Moreover, some bands were not found in BC or HC alone but became visible when composited with kaolinite. Therefore, these findings can further assist in predicting the removal efficiency of the kaolinite-synthesized BCs and HCs for various pollutants.
Size and surface characteristics
The BET surface area, pore size, and pore volume of the BM and synthesized materials are shown in Table 2 . The BC materials have a higher surface area than the HC materials: the highest surface area was found in BC (290.89 m 2 g −1 ) and the lowest in HC (5.32 m 2 g −1 ). The surface areas of BCK10 and BCK20 (225.14 and 180.40 m 2 g −1 ) were reduced compared with pristine BC (290.89 m 2 g −1 ), indicating that the pores of BC might have been covered/clogged by kaolinite 39 . Conversely, pristine HC underwent agglomeration and revealed a low surface area 97 , 100 ; accordingly, the surface areas of HCK10 and HCK20 (16.11 and 15.44 m 2 g −1 ) were about threefold higher than that of pristine HC (5.32 m 2 g −1 ). The composite interfaces retained the kaolinite particles, increasing the surface area of kaolinite-HC 43 , 101 . The pore size of BCK10 (38.31 Å) was larger than that of BC (28.81 Å), whereas that of BCK20 (28.17 Å) was slightly smaller. On the contrary, the pore sizes of HCK10 and HCK20 (145.90 and 175.92 Å) were lower than that of HC (187.71 Å). The largest pore size appeared in HC (187.71 Å) and the smallest in BCK20 (28.17 Å). With an increasing amount of kaolinite, the surface area decreased by 22.68% in BCK10 and 38% in BCK20 compared with pristine BC, while kaolinite addition increased the surface area of HCK10 by 203% and that of HCK20 by 190% compared with pristine HC. Overall, the pore size of the HC materials was more than 10 times larger than that of the BC materials. Previous studies showed that the low surface area of zeolite- and silica-composited BC is due to plugging of the pores by minerals 51 . Yao et al. 35 also mentioned that blockage of BC pores by clay mineral particles could cause the decreased surface area of clay-biochar composites.
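The percentage changes quoted above follow directly from the Table 2 surface-area values:

```python
def pct_change(baseline, value):
    """Signed percent change relative to the baseline material."""
    return (value - baseline) / baseline * 100.0

sa = {"BC": 290.89, "BCK10": 225.14, "BCK20": 180.40,  # m2/g, Table 2
      "HC": 5.32, "HCK10": 16.11, "HCK20": 15.44}

print(round(pct_change(sa["BC"], sa["BCK10"]), 1))  # -22.6 (decrease)
print(round(pct_change(sa["HC"], sa["HCK10"]), 1))  # 202.8 (~203% increase)
```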
The hydrodynamic sizes of the particles of the synthesized materials in aqueous suspensions were also determined (Table 2 ). Dynamic light scattering (DLS) reports any particle, or particle aggregate, of equivalent diameter as a similar size. The average (hydrodynamic) particle size was 2.63 μm for BC, 2.34 μm for BCK10, 2.73 μm for BCK20, 2.19 μm for HC, 2.38 μm for HCK10, and 3.10 μm for HCK20. These particle-size analyses indicated that HC had the minimum particle size and HCK20 the maximum.
Colloidal dispersions have an electrokinetic potential known as the zeta potential, whose value is influenced by the surface charge of the individual particles. In our study, the zeta potential values of the synthesized materials were determined as a function of the solution pH (Table 2 ). The zeta potential of pristine BC and the BC-based materials ranged from − 25.06 to − 25.08 mV, while that of pristine HC and the HC-based materials ranged from − 21.68 to − 24.98 mV. The highest (most negative) zeta potential was found in BCK10 (− 25.08 mV) and the lowest in HCK20 (− 21.86 mV). Consequently, the negative charge of pristine BC and the BC-based materials is slightly higher than that of pristine HC and the HC-based materials. The zeta potential of all the synthesized materials was negative, indicating that all their surfaces are negatively charged. A larger (more negative) zeta potential is beneficial for remediation, especially for the removal of cationic species.
Elemental composition and carbon stability
The elemental composition of the synthesized materials is presented in Table 3 . Compared with BM, thermal treatment enhanced the total C contents of the BC and HC materials; an increased degree of carbonization may be the cause of the rising C contents with pyrolysis and hydrothermal carbonization. The highest C content was observed in BC and the lowest in HCK20 among the synthesized materials. The HC materials had higher H contents, ranging from 5.33 to 5.54%, while the BC materials ranged from 1.08 to 1.41%. Among the synthesized materials, the maximum N content was in BCK20 and the minimum in BC. The C contents of BC, BCK10, BCK20, HC, HCK10, and HCK20 increased by 47.8%, 44.1%, 43.6%, 18.3%, 20.9%, and 17.8%, respectively, compared with BM. The C content was higher in pristine BC and decreased with kaolinite modification in the BC-based materials, by 6.59% in BCK10 and 7.32% in BCK20; meanwhile, the C content increased in HCK10 (3.37%) and slightly decreased in HCK20 (0.52%) compared with HC. Increasing lignin content in biomass has been reported to enhance carbonization and increase the C content of BC 102 , 103 , 105 , and other studies reported that cellulose and hemicelluloses also significantly affect the C content 104 , 105 . On the other hand, the reduction in H contents of the BC materials exceeded that of the HC materials: compared with BM, the reduction in H for BC, BCK10, BCK20, HC, HCK10, and HCK20 was 76.3%, 78.4%, 82.0%, 10.8%, 7.3%, and 9.9%, respectively. Gai et al. 106 reported that the decrease in the H content of BC was due to the loss of water, gaseous H 2 , hydrocarbons, and tarry vapors. The total N contents decreased with pyrolysis and hydrothermal carbonization in BC (27.3%), HC (3.1%), and HCK10 (11.9%), while they increased in BCK10 (21.5%), BCK20 (32.6%), and HCK20 (18.9%) compared with BM.
Furthermore, the N contents increased with increasing kaolinite modification of BC and HC, being highest in BCK20 and HCK20. The total O contents decreased with pyrolysis and hydrothermal carbonization of the biomass in all materials: the reduction in O was 97.1% in BC, 92.5% in BCK10, 94.5% in BCK20, 27.1% in HC, 30.8% in HCK10, and 29.6% in HCK20, compared with BM. Moreover, the BC-based materials showed higher O contents than pristine BC, while the HC-based materials showed slightly lower O contents than pristine HC. Dehydration, volatilization, and depolymerization could be the causes of the decrease in O contents with hydrothermal carbonization and pyrolysis 107 .
Figure 4 illustrates the Van Krevelen diagram, in which the molar O/C and H/C ratios are used to assess the recalcitrance of the BC and HC materials. With pyrolysis and hydrothermal carbonization, the conocarpus waste biomass is dehydrated and depolymerized to produce smaller dissociation products 24 , 108 . Reduced H/C and O/C molar ratios indicate reduced polarity and a greater degree of aromaticity, which in turn increase the stability of BC 55 ; other research attributed the higher stability of BC composites to their greater polyaromatic carbon content compared with HC composites 109 . From Table 3 and Fig. 4 , the surface polarity index (indicated by the O/C molar ratio) of the BC and HC materials decreased with pyrolysis and hydrothermal carbonization compared with BM, indicating increased hydrophobicity of these materials. Overall, the highest values of O/C and H/C (0.604 and 1.47) were found for BM. Among the synthesized materials, the maximum O/C and H/C values (0.36 and 1.09) were shown by HCK20 and HC, respectively. The BC-based materials showed slightly decreased H/C values and increased O/C values compared with pristine BC, whereas the HC-based materials showed slightly increased H/C values and decreased O/C values compared with pristine HC. The O/C ratios of BC (0.01), BCK10 (0.025), and BCK20 (0.02) were less than 0.2, suggesting high stability and a half-life of more than 1000 years 110 . In contrast, the O/C ratios of HC (0.360), HCK10 (0.331), and HCK20 (0.350) were between 0.2 and 0.6, suggesting a half-life of 100 to 1000 years 110 . For BM, the O/C ratio was 0.604, which probably corresponds to a half-life of less than 100 years 110 . Likewise, the low H/C values of the BC and HC materials compared with BM indicated high aromaticity and reactivity 111 .
In addition, the much lower H/C ratio of BC materials compared to HC materials demonstrated that BC materials were heavily carbonized and had highly aromatic structures. According to the IBI 112 criteria, H/C < 0.7 indicates greater fused aromatic ring structures, a condition met by the BC materials, whereas the H/C molar ratio of the HC materials was higher than that of the BC materials and exceeded 0.7. Therefore, BC materials showed higher aromaticity and lower polarity than HC materials. Hence, the H/C and O/C ratios of the BC-based and HC-based materials indicated more aromatic C and lower hydrophilicity 113 .
TGA and recalcitrance index (R 50 )
In this study, thermogravimetric analysis (TGA) was applied to investigate the long-term stability of the synthesized materials. The results of the TGA-DTG analysis are shown in Fig. 5 a,b. Our results indicated that the BM thermally decomposed earlier than the HC and BC materials due to its thermal instability. The thermogravimetric curves of the different BC and HC materials showed similar behavior, with weight loss (%) following a decreasing trend with rising temperature. The sharp weight loss began at approximately 650–700 °C for BC, BCK10, and BCK20, at approximately 350–400 °C for HC, HCK10, and HCK20, and at approximately 250–350 °C for BM. The thermograms displayed two general regions of weight loss: (i) around 300 °C for BM and HC materials, because of the thermal degradation of cellulose and hemicellulose compounds 114 , and (ii) around 600–1000 °C, because of lignin degradation 51 . The order of degradability was as follows: BCK20 < BCK10 < BC < HCK20 < HCK10 < HC < BM. Therefore, the BC and HC with a 20% kaolinite ratio were the most stable, followed by those with a 10% kaolinite ratio and then the pristine chars. Moreover, a higher weight loss was observed in HC materials than in BC materials.
The recalcitrance of the synthesized materials in soil depends on the potential of BC and HC materials to resist thermal, physical, and chemical degradation. The formation of organometallic complexes and the aromatic carbon structure play significant roles in BC and HC materials. Due to its increased aromaticity, the C of BC materials has a higher recalcitrance potential than that of HC materials and BM; however, the recalcitrance and stability of C in the soil after adding BC and HC materials vary depending on the feedstock type and composition, the texture and structure of the soil, and other environmental factors. Hence, the recalcitrance index of BC and HC materials needs to be determined relative to graphite, which is one of the most stable forms of C 115 . Accordingly, Harvey et al. 47 used TGA thermograms to derive a novel recalcitrance index, denoted R 50 , that predicts the materials' recalcitrance potential.
To calculate the recalcitrance index (R 50 ), the TGA thermograms of all materials were corrected for moisture and ash contents, which reflect the stability of the synthesized materials and the extent of C sequestration 116 . Figure 5 c,d display the moisture- and ash-corrected TGA thermograms. The R 50 values of BM and HC in this study were 0.40 and 0.42, indicating that these are highly degradable and belong to class 3 (Table 3 ). BC, BCK10, and BCK20 fell into class 1 with R 50 values of 0.78, 0.81, and 0.79, respectively, indicating high recalcitrance potential 47 . The R 50 values of the HCK10 and HCK20 composites were 0.50 and 0.51, suggesting they were minimally degradable. The increased R 50 in the BCK10 and BCK20 composites could be attributed to the presence of kaolinite, which improved the oxidation resistance of BC. The previous study by Ahmad et al. 51 demonstrated that the higher R 50 in silica-BC composites could be related to the probable protection of C by silica particles, which may be controlled through the pyrolysis process. The reason was also attributed to the change from the aromatic C–C/C=C functionality to the C–O and C–H configuration of the BC surface, forming stable organo-mineral complexes (i.e., C–O–Al and C–O–Si) with clay minerals 117 . The previous study by Wang et al. 115 showed that the R 50 increased to 0.89 for kaolinite-composited BC, reflecting that mineral-composited BC could improve the thermal stability of BC. According to Ahmad et al. 111 , changes in structural arrangements within the silica-BC complex may be responsible for the high recalcitrance and C sequestration potential of silica-composited BC. Therefore, the increased R 50 values of the BC materials suggest greater recalcitrance and stability because of the particular interactions of kaolinite with the BC matrix.
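The R 50 calculation described above can be sketched as follows: T 50 is read off the moisture- and ash-corrected thermogram as the temperature at 50% mass loss, and R 50 is the ratio of the sample's T 50 to that of graphite. The thermogram values and the graphite T 50 below are illustrative placeholders, not measured data; the class labels follow the degradability classes used in the text.

```python
import numpy as np

# Sketch of the Harvey et al. R50 index: R50 = T50,sample / T50,graphite,
# where T50 is the temperature at 50% loss of the moisture/ash-corrected mass.
# T50_GRAPHITE below is an assumed placeholder, not a measured value.

T50_GRAPHITE = 886.0  # degrees C, illustrative

def t50(temps_c, mass_frac):
    """Interpolate the temperature (deg C) at which corrected mass falls to 0.5.
    mass_frac is the corrected fraction remaining, decreasing with temperature."""
    t = np.asarray(temps_c, dtype=float)
    m = np.asarray(mass_frac, dtype=float)
    # np.interp needs increasing x, so interpolate T as a function of ascending mass
    return float(np.interp(0.5, m[::-1], t[::-1]))

def r50(temps_c, mass_frac, t50_graphite=T50_GRAPHITE):
    return t50(temps_c, mass_frac) / t50_graphite

def recalcitrance_class(r):
    """Degradability classes as used in the text."""
    if r > 0.70:
        return "class 1: high recalcitrance"
    if r >= 0.50:
        return "class 2: minimally degradable"
    return "class 3: highly degradable"
```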
Although it provides a range of recalcitrance relative to graphite (a highly stable C form), the recalcitrance index (R 50 ) does not provide information on the specific timing of C sequestration. Consequently, the C sequestration potential (CS) of the BC and HC materials was calculated using R 50 , yield, and C contents. A higher CS percentage relies on (i) the yield (%), (ii) the C contents (%) before and after pyrolysis and hydrothermal carbonization, and (iii) the R 50 value. In our study, the CS of the BC materials ranged from 46.43% to 51.63%, while the HC materials showed CS in the range of 29.16%–39.94%. The highest CS value was found in BCK20 (51.63%), while the lowest was in HC. Moreover, increased CS values were associated with an increasing percentage of kaolinite in BC and HC, reaching 51.63% in BCK20 and 39.94% in HCK20 compared to the pristine chars. Ahmad et al. 111 revealed that the presence of silica in BC-based materials enhanced the C sequestration potential through modifications to the structural arrangements of the silica-BC complex. Another study by Sewu et al. 118 indicated that bentonite-BC materials had higher C sequestration potential than the original BC. Therefore, in our study, the formation of Si–O–C and Al–O–C bonds in the BC-based and HC-based materials likely underlies their higher C sequestration potential.
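The text lists the three inputs of CS (yield, C contents before and after treatment, and R 50 ) without stating the formula, so the sketch below uses one common formulation from the literature as an assumption: CS% = yield fraction × (C_char / C_biomass) × R 50 × 100. The input values in the test are illustrative, not the study's data.

```python
# Assumed formulation (common in the char literature, not quoted from this paper):
#   CS% = char_yield_fraction * (C_char% / C_biomass%) * R50 * 100
def carbon_sequestration(char_yield_frac, c_char_pct, c_biomass_pct, r50):
    """C sequestration potential (%) from yield, C contents, and R50."""
    return char_yield_frac * (c_char_pct / c_biomass_pct) * r50 * 100.0
```

Under this formulation, CS rises with any of the three inputs, which matches the qualitative dependence described above.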
Toxicity evaluation
Contents of PAHs
Biomass combustion is one of the primary sources of anthropogenic PAHs 119 . The total PAHs quantities and the proportional contributions of individual PAHs provide valuable data regarding the quality of the synthesized BC and HC materials in terms of environmental safety. The contents of PAHs in the BC and HC materials were determined to identify the PAHs retained in all materials under the different pyrolysis and hydrothermal carbonization processes, and the findings are presented in Table 4 . Pristine HC showed higher amounts of all PAHs than pristine BC. The acenaphthene content was around 2.6 times greater in pristine HC than in pristine BC. Moreover, around 2 times greater phenanthrene, anthracene, and benzo[e]pyrene (BeP) contents, about 1.5 times greater naphthalene and acenaphthylene contents, and 1.3 times greater fluorene, fluoranthene, pyrene, and retene contents were detected in pristine HC in comparison to pristine BC. These findings support previous studies showing that pristine HC contains more PAHs than pristine BC 8 , 120 . Compared to dry pyrolysis, the tar condensation on the surface of HC during hydrothermal carbonization of biomass might have resulted in the retention of PAHs in such materials 58 . In contrast, the modification of BC and HC with kaolinite decreased the PAHs contents compared to pristine BC and HC. The HC-based materials (HCK10 and HCK20) showed a greater reduction of PAHs contents than the BC-based materials (BCK10 and BCK20). Similarly, increasing kaolinite modification led to a greater reduction of PAHs, such that BCK20 and HCK20 were more PAHs-reducing than BCK10 and HCK10. Generally, PAHs contents in HC-based materials were lower than in BC-based materials. Compared to pristine BC and HC, the reduction for these materials ranged significantly between 1.2 and 6.4 times for most of the PAHs contents. For instance, the reduction of phenanthrene was about 5 times for HCK10 and 6.4 times for BCK20 compared to HC.
Similarly, the reduction of naphthalene was about 1.2 times for BCK10 and 2.6 times for BCK20 compared to pristine BC, and about 2.8 times for HCK10 and 3.8 times for HCK20 compared to pristine HC. Overall, ten of the seventeen PAHs were detected in the BC and HC materials. The total ∑PAHs concentrations of the BC and HC materials ranged from 739.1 μg kg −1 in HCK20 to 2770.7 μg kg −1 in HC. The order of the ∑PAHs contents was HC (2770.7 μg kg −1 ) > BC (1752.9 μg kg −1 ) > BCK10 (1514.1 μg kg −1 ) > HCK10 (972.8 μg kg −1 ) > BCK20 (934.4 μg kg −1 ) > HCK20 (739.1 μg kg −1 ). Meanwhile, the ∑15 PAHs contents (the ∑16 US-EPA PAHs excluding benzo[b]fluoranthene) were 1487.8 μg kg −1 in BC, 1293.3 μg kg −1 in BCK10, 799.5 μg kg −1 in BCK20, 2383.2 μg kg −1 in HC, 861.71 μg kg −1 in HCK10, and 619.5 μg kg −1 in HCK20. Therefore, the ∑15 PAHs contents in the BC and HC materials were lower than the ∑16 PAHs threshold (6000–20,000 μg kg −1 , with a lower limit of 6000 μg kg −1 ) provided by the IBI 112 . Hence, the PAHs compounds in the BC and HC materials are considered safe for use as soil amendments and pose the lowest potential risk for PAHs-related effects.
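The ∑PAHs screening above amounts to summing the individual PAH concentrations and comparing the total with the IBI lower limit of 6000 μg kg −1 . A minimal sketch, with illustrative (not measured) concentrations:

```python
# Sketch: summing individual PAH concentrations (ug/kg) and screening against
# the IBI lower threshold quoted in the text. Values used are illustrative.

IBI_LOWER_LIMIT = 6000.0  # ug/kg, lower end of the 6000-20,000 ug/kg range

def total_pahs(concentrations):
    """Sum a {pah_name: concentration_ug_per_kg} mapping."""
    return sum(concentrations.values())

def passes_ibi(concentrations, limit=IBI_LOWER_LIMIT):
    """True if the total PAHs content is below the screening limit."""
    return total_pahs(concentrations) < limit
```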
Based on the number of aromatic rings, PAHs with 3 rings were predominant in the BC and HC materials, followed by those with 4, 2, and 5 rings (Fig. 6 ). The large proportion of low-molecular-weight 3 ring PAHs in the BC and HC materials may be due to their atmospheric emission during pyrolysis and their retention in the condensed tar during hydrothermal carbonization, which may also explain why low-molecular-weight 3 ring PAHs were present in significant concentrations in HC. Comparatively, pristine BC contained higher proportions of 2 and 4 ring PAHs than pristine HC, while pristine HC contained higher proportions of 3 and 5 ring PAHs. In the kaolinite-composited materials, the proportions of 2 and 5 ring PAHs were equal at 17% and 3% in BCK10, HCK10, and HCK20 (except for the 2 ring PAHs in BCK20, which were 13%). The proportions of 3 ring PAHs were ordered as BCK20 (67%) > HCK10 and HCK20 (64%) > BCK10 (61%). Likewise, the proportions of 4 ring PAHs were ordered as BCK10 (19%) > HCK10 and HCK20 (17%) > BCK20 (16%). Generally, the proportion of 3 ring PAHs was the highest (ranging from 59 to 67%) in all synthesized materials, while the proportion of 5 ring PAHs was the lowest (ranging from 2 to 3%). Previous studies mentioned that the toxicity and PAHs concentration in BC decreased significantly with increasing temperature, which can be attributed to evaporation at higher temperatures 121 – 123 . In addition, the most important factors in the elimination of PAHs compounds are volatility and thermal degradation 124 . Washing the BC and HC materials could further reduce the negative impact on plant growth by reducing the PAHs compounds 125 . Hence, kaolinite-composited BC and HC could be better options as amendments for soil and water due to their decreased PAHs contents.
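The ring-class proportions shown in Fig. 6 can be reproduced by grouping the PAH concentrations by aromatic ring number and normalizing to the total. The sketch below uses standard ring counts for a few PAHs and illustrative concentrations:

```python
from collections import defaultdict

# Sketch: percentage share of each aromatic ring class, as in Fig. 6.
# Ring counts are standard chemistry; concentrations are illustrative only.

RING_COUNT = {"naphthalene": 2, "phenanthrene": 3, "anthracene": 3,
              "pyrene": 4, "fluoranthene": 4, "benzo[e]pyrene": 5}

def ring_proportions(conc):
    """Map {pah: concentration} to {ring_number: percent_of_total}."""
    totals = defaultdict(float)
    for pah, c in conc.items():
        totals[RING_COUNT[pah]] += c
    grand = sum(totals.values())
    return {rings: 100.0 * v / grand for rings, v in sorted(totals.items())}
```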
Biochar (BC), BC with 10% kaolinite enrichment (BCK10), BC with 20% kaolinite enrichment (BCK20), Hydrochar (HC), HC with 10% kaolinite enrichment (HCK10), HC with 20% kaolinite enrichment (HCK20).
Maize germination and growth
The germination test results of maize ( Zea mays L.) are presented in Fig. 7 a. In the control (CK) treatment, 75% germination of maize was detected. Seed germination was significantly affected by the application of BC and HC materials. The highest germination was observed in the HC (0.2 g) treatment, at 85%, while the lowest was in the HC (0.4 g) and HCK20 (0.2 g) treatments, at 55%. In addition, there were no significant differences between the 0.2 and 0.4 g applications of BC, BCK10, BCK20, and HCK10. The order of the germination percentages was as follows: HC (0.2 g) > CK, BC (0.2 and 0.4 g), BCK10 (0.2 and 0.4 g), BCK20 (0.2 and 0.4 g), HCK10 (0.2 and 0.4 g), HCK20 (0.4 g) > HC (0.4 g), HCK20 (0.2 g). Our findings show that these synthesized materials are safe to apply to soil to enhance plant production without phytotoxic effects. This supports previous studies, which mentioned that the application of HC mixed with kaolinite could mitigate greenhouse gas emissions and might support improved soil retention of C and N for better management of agricultural nutrients 101 . Likewise, another study confirmed that applying a clay-BC composite positively impacted the yield and quality of bluegrass and improved soil properties 126 .
To evaluate the effect of the various treatments, the shoot and root lengths of the maize seedlings were also determined, and the results are shown in Fig. 7 b. Compared to the CK treatment, all treatments significantly improved maize growth (except BC, 0.2 g). The shoot length of maize seedlings for the BCK20 (0.2 and 0.4 g), HC (0.2 and 0.4 g), HCK10 (0.2 g), and HCK20 (0.2 g) treatments was 54.63%, 60.62%, 65.21%, 61.55%, 53.56%, and 64.20% higher than CK, respectively. The other treatments, i.e., BC (0.4 g), BCK10 (0.2 and 0.4 g), HCK10 (0.4 g), and HCK20 (0.4 g), also increased the shoot length of maize seedlings, by 36.11%, 25.62%, 40.60%, 24.36%, and 41.18%, respectively, compared to CK. These findings confirm the potential of the synthesized materials to improve plant growth. Likewise, HC (0.2 and 0.4 g) and HCK10 (0.2 g) significantly increased the root length of maize seedlings by 89.27%, 90.80%, and 88.29%, respectively, compared to CK. The root-length increases of the other treatments were lower than these but still higher than CK: 27.16% in BC (0.2 g), 38.71% in BC (0.4 g), 62.07% in BCK10 (0.2 g), 51.08% in BCK10 (0.4 g), 41.48% in BCK20 (0.2 g), 53.16% in BCK20 (0.4 g), 51.96% in HCK10 (0.4 g), 60.31% in HCK20 (0.2 g), and 52.06% in HCK20 (0.4 g).
Figure 7 c,d depict the fresh and dry weights of maize seedlings as influenced by the synthesized materials. The HC (0.2 g) treatment enhanced the fresh and dry maize weights by 112.49% and 105.40%, respectively, compared to CK, while the increases in the other treatments were lower than in the HC (0.2 g) treatment. Hence, all treatments significantly increased the fresh and dry weights, but in varying proportions compared to the CK treatment. Mumme et al. 127 showed that biochar-zeolite composites increased the germination rate compared to non-amended treatments. Another study by Medha et al. 41 showed that Sorghum grass root and shoot lengths increased significantly compared to CK; furthermore, among the tested materials, the shoot and root lengths increased substantially, by five- and ten-fold, respectively, with increasing bentonite-biochar composite and kaolinite fractions. In a study on the effect of biochar on the physiological growth of maize, Cong et al. 128 found that biochar increased the dry biomass of maize by 22.22% against the control condition, while an increase of up to 58.39% in plant height was noted against the control with no biochar amendment. In another study, Yang et al. 129 stated that a kaolinite composite with walnut shell-derived biochar improved its surface and adsorptive characteristics and nutrient release capacity, which ultimately positively impacted plant growth. Fregolente et al. 130 found a positive impact of hydrochar application on root and shoot development and dry biomass production of maize. Therefore, the seed germination and seedling growth parameters revealed positive impacts of the synthesized materials in varying proportions. Consequently, it is highly recommended to properly assess BC and HC materials before applying them to the soil as amendments.
The correlations between the characteristics of the synthesized materials (R 50 , CS, O/C, CEC, SA, and ∑ total PAHs toxicity) and maize germination and growth were established (Table 5 ). The germination and growth parameters were averaged over the additive rates. The relationships revealed significant correlations between maize germination and growth and the characteristics of the synthesized materials (R 50 , CS, O/C, CEC, SA, and PAHs). The significant correlation coefficients (r) between germination and the R 50 , O/C, CEC, and SA of the synthesized materials were 0.75, − 0.78, − 0.78, and 0.81, respectively. Likewise, the r values between shoot length and R 50 , O/C, CEC, and SA were − 0.52, 0.54, 0.55, and − 0.71, respectively. Similarly, the r values between root length and R 50 , CS, O/C, CEC, and SA were − 0.78, − 0.83, 0.76, 0.78, and − 0.80, respectively. On the other hand, the r value between fresh weight and ∑ total PAHs was 0.53, while it was − 0.50 between dry weight and SA. According to our findings, the characteristics of the synthesized materials were significantly related to maize germination and growth.
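The Table 5 relationships are plain Pearson correlation coefficients between a material property (e.g. R 50 ) and a germination or growth response. A minimal sketch with illustrative vectors, not the study's data:

```python
import numpy as np

# Sketch: Pearson r between a material property vector and a response vector,
# one value per material, as computed for Table 5. Inputs are illustrative.

def pearson_r(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])
```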
Implications of the study
In recent years, fabricated BC and HC materials have received considerable interest. Although a few studies have examined the feasibility of clay-supported BC and HC composites in different applications, evidence from germination and growth parameters is still very limited. In our study, pristine BC and HC and kaolinite-composited BC and HC were synthesized and characterized. The synthesized materials can be used successfully to sequester C, immobilize inorganic and organic pollutants in water and soil, increase N and phosphorus availability, and increase plant production at the field scale. Among them, the BC-based materials, which had R 50 > 0.7 and CS > 47.63%, could be used for long-term stability and C sequestration owing to their high recalcitrance potential compared to HC-based materials. The higher stability and C sequestration of these charred materials could help mitigate climate change. Furthermore, the synthesized materials can be used successfully for water and soil remediation; BC-based materials have a high surface area and zeta potential, while HC materials have a high CEC and abundant surface functional groups. Moreover, kaolinite-composited BC and HC can be used for plant production, as they showed no toxicity in the germination test and the lowest potential risk for PAHs-related impacts. On the other hand, HC-based materials typically have the advantage of a low pH and are therefore suitable for arid regions that suffer from high soil pH; their addition to alkaline soils can counteract the alkalinity and lower the pH in these areas. In addition, the kaolinite-composited BC and HC materials can enhance soil health by improving the physical structure of the soil, increasing porosity, reducing bulk density, enhancing soil aggregation, degrading organic contaminants, and improving water and nutrient retention, as well as contributing to climate change mitigation.

Conclusion
Conocarpus waste-derived BC and HC were composited with kaolinite deposits and characterized for their chemical, proximate, elemental, and structural properties. Moreover, the potential toxicity of the synthesized materials was assessed for their implications as soil amendments. Analyses of the structural, morphological, and chemical properties revealed clear distinctions between the pristine and the engineered BCs and HCs. BC-based materials showed greater recalcitrance indices (R 50 : 0.79–0.81) and C sequestration potentials (47.63–51.63%) compared to HC-based materials (R 50 : 0.50–0.51; C sequestration: 39.32–39.94%). Kaolinite particles prevented the thermal degradation of C particles, enhancing their stability. The O/C and H/C ratios were in the range of 0.02–0.025 and approximately 0.15, respectively, in the BC-based materials, and 0.33–0.35 and 1.08–1.09, respectively, in the HC-based materials, suggesting more aromaticity in the BC-based materials compared to the HC-based materials. In addition, these materials can be utilized to sequester soil C pools for a longer time when used to amend the soil. BC-based materials showed a higher CEC than pristine BC, while HC-based materials showed a higher surface area than pristine HC. The total PAHs content decreased with an increasing percentage of kaolinite deposits in BC and HC; therefore, a greater reduction was shown in BCK20 and HCK20 than in the pristine chars. Overall, the total PAHs contents in the BC and HC materials were below the USEPA's suggested limits. The removal of organic pollutants was mostly affected by the pyrolysis and hydrothermal carbonization processes; therefore, thermal treatments can be a good way to generate chars with kaolinite deposits that contain little or no organic pollutants and could be utilized as safe soil amendments. Kaolinite-synthesized BC and HC demonstrated a positive impact on maize germination and seedling growth.
In the future, kaolinite-synthesized BC and HC can be investigated for their efficacy as inexpensive soil amendments to improve soil health and plant productivity.

Abstract

In this study, biochar (BC) and hydrochar (HC) composites were synthesized with natural kaolinite clay, and their properties, stability, carbon (C) sequestration potential, polycyclic aromatic hydrocarbons (PAHs) toxicity, and impacts on maize germination and growth were explored. Conocarpus waste was pretreated with 0%, 10%, and 20% kaolinite and pyrolyzed to produce BCs (BC, BCK10, and BCK20, respectively) or hydrothermally carbonized to produce HCs (HC, HCK10, and HCK20, respectively). The synthesized materials were characterized using X-ray diffraction, scanning electron microscopy, Fourier transform infrared spectroscopy, thermogravimetric analysis, surface area measurement, proximate analyses, and chemical analysis to investigate the distinctions in physicochemical and structural characteristics. The BCs showed higher C contents (85.73–92.50%) compared to the HCs (58.81–61.11%). The BCs demonstrated higher thermal stability, aromaticity, and C sequestration potential than the HCs. Kaolinite-enriched BCs showed a higher cation exchange capacity than pristine BC (34.97% higher in BCK10 and 38.04% higher in BCK20), while the surface area was highest in the kaolinite-composited HCs (202.8% higher in HCK10 and 190.2% higher in HCK20 than pristine HC). The recalcitrance index (R 50 ) indicated higher recalcitrance for BC, BCK10, and BCK20 (R 50 > 0.7), minimal degradability for HCK10 and HCK20 (0.5 < R 50 < 0.7), and higher degradability for biomass and HC (R 50 < 0.5). Overall, increasing the kaolinite enrichment percentage significantly enhanced the thermal stability and C sequestration potential of the charred materials, which may be attributed to changes in their structural arrangements.
The ∑ total PAHs concentrations in the synthesized materials were below the USEPA's suggested limits, indicating their safe use as soil amendments. Germination indices reflected positive impacts of the synthesized charred materials on maize germination and growth. Therefore, we propose that kaolinite-composited BCs and HCs could be considered efficient and cost-effective soil amendments for improving plant growth.
Acknowledgements
The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project No. (IFKSUOR3-603-1).
Author contributions
Conceptualization, H.A.A.-S. and A.S.A.-F.; methodology, H.A.A.-S., A.S.A.-F. and M.I.A.-W.; software, M.A. and A.R.A.U.; validation, H.A.A.-S., A.R.A.U. and M.I.A.-W.; formal analysis, J.A., M.I.R. and M.A.; resources, A.S.A.-F. and M.I.A.-W.; investigation, H.A.A.-S., M.I.R. and M.A.M.; writing, H.A.A.-S.; data curation, H.A.A.-S.; writing, reviewing and editing, M.A., J.A. and M.A.M.; visualization, H.A.A.-S.; supervision, A.S.A.-F. and M.I.A.-W.; funding acquisition, A.S.A.-F.; project administration, M.I.A.-W. and A.S.A.-F. All authors have read and agreed to the published version of the manuscript.
Funding
The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia for funding this research work through the Project No. (IFKSUOR3-144–2).
Data availability
The data analyzed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare no competing interests.

Sci Rep. 2024 Jan 13; 14:1259. License: CC BY.
PMC10787758 (PMID: 38218951)

Introduction
For poultry farm owners globally, detecting infertile eggs before incubation is an essential economic concern. Since the embryo at the initial stages of development is too small, fertility detection cannot be carried out by the traditional candling method. Moreover, hatchery statistics indicate that about 7–8% of the total number of eggs put into incubation remain unhatched, even though they should have been fertilized 1 . Commercial poultry hatcheries therefore incubate billions of eggs per year worldwide that will never hatch. In the long run, the main problem with incubating such unproductive infertile eggs is the waste of billions of eggs that could otherwise have been used for human consumption. In addition, energy losses due to the inefficient use of incubator space, increased handling costs and decreased hatchery production, and the risk of contamination of the whole egg set by exploder eggs are other bottlenecks of incubating non-fertile eggs 2 – 4 . Therefore, developing a non-destructive and more targeted method for early egg fertility identification, preferably before the eggs are passed for incubation, could improve the efficiency of the hatchery industry and promise huge economic returns.
Different methods have been introduced so far to separate unfertilized eggs, most of which are applicable only 2–5 days after incubation 3 . Conventional candling is the most popular method to assess flock fertility, usually performed 5–10 days after incubation. This method is not only slow and cumbersome, but also only approximately 5% of the entire egg set is randomly investigated, so the remaining 95% may still include infertile eggs 4 . Other approaches have been presented to indirectly monitor the fertility of chicken eggs. Under the best conditions, acceptable results for infertility detection were obtained by machine vision after 3 days 5 , thermal imaging after 14 days 6 , and temperature sensors after 21 days 7 of incubation, as well as by visible and short-wavelength near-infrared (Vis-SWNIR) transmittance spectroscopy 8 and NIR hyperspectral imaging on day 0, before incubation 2 .
Among these methods, the hyperspectral imaging (HSI) technique has recently been applied to detect egg fertility and embryo development 1 , 2 , 9 . In general, hyperspectral imaging is a type of spectral imaging that combines spectral data from a part of the electromagnetic spectrum with spatial information from a targeted sample. The main purpose of spectral imaging is to obtain the spectral content or signature for every single pixel of the image 10 . The spectral signature is unique to different materials, much like a human fingerprint, and as a result, by obtaining this signature, it is possible to identify the amount and spatial distribution of different materials 11 .
More specifically, hyperspectral transmission imaging in the Vis-SWNIR spectral range has been used to detect developing eggs with accuracies of 96%, 92%, 100%, and 100% on days 0, 1, 2, and 3 after incubation, respectively 12 . Afterward, in addition to recognizing infertile eggs after the start of the incubation process, Smith et al. 13 tried to detect infertile eggs on day 0 before incubation. The results presented overall accuracies of 71% on day 0 before incubation, 63% on day 1, 65% on day 2, and 83% on day 3 after incubation. Zhihui et al. 14 utilized the entire spectral data within the wavelength range of 400–760 nm to detect fertile hatching eggs before incubation, achieving the highest accuracy of 93% through the application of a support vector machine classifier.
The most notable breakthrough in unraveling this problem was made by Liu and Ngadi 2 , who reported the detection of infertile eggs on day 0 before incubation, albeit with a relatively more expensive HSI system operating in the NIR spectral range (900–1700 nm). The best overall classification accuracies obtained were 100% on day 0, 79% on day 1, 74% on day 2, 82% on day 3, and 84% on day 4.
Despite this ability to detect fertility and monitor embryo development, to the best of our knowledge, no promising study has been reported on extracting effective wavelengths using HSI systems for diagnosing chicken egg fertility on day 0 before incubation.
After collecting the hyperspectral images, it is necessary to extract the desired spatial and spectral features from the images. However, due to the small size of the egg embryo on day 0 and limitations in the spatial resolution of hyperspectral cameras, it is difficult to visualize the embryo in hyperspectral images. Therefore, spectral analysis of hyperspectral images appears superior to spatial analysis, especially since other studies have confirmed that the transmission spectra are affected by the presence of the embryo in the egg, which causes some absorption features 3 , 15 .
In spectral analysis, choosing effective wavelengths is a critical step. Opting for the right wavelengths can decrease the dimensionality and complexity of data, ultimately enhancing the predictive capability of the model. The use of effective wavelengths not only reduces analysis time but is also advantageous for implementing the model in an online multispectral imaging system 16 . Moreover, given the vast amount of information in hyperspectral images, it becomes crucial to select the most appropriate classification method for discriminating the desired classes.
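As a generic illustration of effective-wavelength selection (not the specific criterion used in this study), the sketch below ranks bands by a simple two-class separability score (an absolute t-like statistic per band) and keeps the top-k bands; all names, thresholds, and data are hypothetical.

```python
import numpy as np

# Hedged sketch: per-band class separability between fertile and infertile
# spectra (rows = samples, columns = wavelengths). Illustrative only; it is
# not the discriminatory-power criterion used in the paper.

def band_scores(X_a, X_b):
    X_a = np.asarray(X_a, dtype=float)
    X_b = np.asarray(X_b, dtype=float)
    mean_diff = np.abs(X_a.mean(axis=0) - X_b.mean(axis=0))
    se = np.sqrt(X_a.var(axis=0, ddof=1) / len(X_a)
                 + X_b.var(axis=0, ddof=1) / len(X_b) + 1e-12)
    return mean_diff / se

def select_bands(X_a, X_b, k):
    """Indices of the k most discriminative wavelengths, best first."""
    return np.argsort(band_scores(X_a, X_b))[::-1][:k]
```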
Different classification and wavelength selection approaches with varying predictive power have been reported in the literature on spectral analysis. For example, principal components analysis (PCA) has been used in fertility detection by HSI 1 , 2 ; a naive Bayes classifier 3 and linear discriminant analysis and support vector machines 15 have been used in fertility detection of eggs by Vis-SWNIR transmittance spectroscopy; and receiver operating characteristic (ROC) analysis of the wavelength difference of reflectance spectra has been examined for bruise detection in apples 16 .
In this paper, soft independent modeling of class analogy (SIMCA) was first utilized to investigate the feasibility of Vis-SWNIR spectral data based on the HSI technique for detecting non-fertile eggs. Afterward, the wavelength variables with strong discriminatory power were retained, while the weaker wavelengths were excluded from the data set. The ability of the selected wavebands was then evaluated by importing the raw and preprocessed spectral data into a nonlinear artificial neural network (ANN) classifier.
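A minimal SIMCA sketch, under simplifying assumptions: one PCA model is fitted per class, and a new spectrum is assigned to the class with the smallest reconstruction (Q) residual. Real SIMCA additionally uses F-test-based critical limits for class membership, which are omitted here; the class names and spectra are illustrative, not the study's data.

```python
import numpy as np

# Simplified SIMCA: per-class PCA via SVD, classification by minimum Q residual.

def fit_class_model(X, n_components=1):
    """Fit a PCA model (mean, loadings) to the spectra of one class."""
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def q_residual(spectrum, model):
    """Sum-of-squares reconstruction residual of a spectrum under a class model."""
    mean, loadings = model
    xc = np.asarray(spectrum, dtype=float) - mean
    scores = xc @ loadings.T
    return float(np.sum((xc - scores @ loadings) ** 2))

def simca_classify(spectrum, models):
    """Assign the class whose PCA model reconstructs the spectrum best."""
    return min(models, key=lambda name: q_residual(spectrum, models[name]))
```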
The objectives of this study were (1) to evaluate the feasibility of hyperspectral transmission imaging in the Vis-SWNIR spectral range for detecting non-fertile eggs before the start of the incubation process, (2) to determine and compare the potential of different machine learning tools in classifying fertile and infertile eggs, and (3) to select the most informative wavebands for developing predictive models of the fertility of unincubated chicken eggs. Enhancing the accuracy of fertility detection before incubation through the use of a more cost-effective hyperspectral camera, the selection of the most informative wavelengths, the development of a predictive model based on a discrete number of wavelengths, and the comparison of the performance of various machine learning techniques can be considered the novel aspects of the current study.
Sample preparation
A total of 227 clean, white-shell, fresh, unwashed eggs were prepared, including 131 fertile and 96 infertile eggs. The eggs were collected from a flock of 60 Leghorn laying breeder hens (Hy-Line W-36). The birds were purchased from a commercial laying breeder farm and kept at the poultry farm belonging to the Isfahan University of Technology. The hens were randomly distributed into two sub-flocks of 30 laying breeder hens each. While the first sub-flock was raised without roosters to produce non-fertile eggs, 4 roosters were added to the second sub-flock to create as much fertility as possible. Both sub-flocks were fed a similar, standard diet; therefore, the fertile and infertile eggs were produced under similar conditions in terms of hen age, feeding program, and management. These conditions helped minimize errors and make the presence of the embryo the most important factor driving the differences between the two groups of samples. In other words, the effect of other influential factors, such as variations in egg-shell thickness and internal composition arising from hen age and diet, was markedly diminished.
The egg samples were collected daily from both sub-flocks, then numbered and weighed and their dimensions were measured. After acquiring the hyperspectral images, the eggs were immediately incubated in a commercial incubator (Jam Toyor-504, Iran) under the standard conditions (temperature of 37.5 °C and relative humidity of 60%, turned every hour) 2 . After 5 days of incubation, eggs were candled and broken out to determine their fertility status. The infertile eggs were distinguished and related hyperspectral data were transferred to the proper sample group.
Hyperspectral imaging system
A line-scanning visible and near-infrared (Vis–NIR) HSI system (model V1001, OPTC, Iran) was utilized to acquire spectral images in full-transmittance mode over the range of 400–1000 nm with an average optical spectral resolution of 2 nm (Fig. 1 ). The mirror of the HSI camera was connected to a stepper motor, so that both spectral and spatial information of the illuminated egg could be captured without moving the sample (Fig. 1 b). The exposure time was set at 100 ms after trial and error to reach the maximum possible signal-to-noise ratio in the spectral data. To cover the entire width of the egg and achieve maximum spatial resolution, the number of scans was set at 400 with a distance of 100 cm between the camera and the sample. The HSI set-up was covered with a polyurethane cover to prevent ambient light from entering the imaging chamber (Fig. 1 a).
The egg sample was vertically placed in a 5 cm diameter hole drilled in a wooden board between the camera and the light sources (Fig. 1 c). With this arrangement, light transmitted through the egg could enter the camera. The light sources comprised one 150W lamp (at the center) and six 50W lamps (arranged around the center) of tungsten halogen, with color temperatures of 3270 K and 3200 K, respectively. These lamps were positioned beneath the sample, as shown in Fig. 1 b. Two fans were used in the light exposure chamber, blowing from outside to inside (Fig. 1 c). In addition to cooling the light sources and preventing the overheating of the egg samples, the blowing of warm air under the eggs could provide the heat required for the embryo’s survival while capturing the hyperspectral images.
The HSI images of the samples were originally saved in raw format, comprising 1279 bands in the spectral (λ) dimension and 400 × 400 pixels in the two spatial dimensions. Therefore, the original output hypercube had dimensions of 400 rows × 400 columns × 1279 bands. However, due to the absorption of the eggshell 8 and the decrease in the intensity of the light sources at wavelengths below 500 nm, as well as the low signal-to-noise ratio of the camera’s detector at wavelengths above 950 nm, the transmission spectral data from 500–950 nm were utilized for predicting the fertility of chicken eggs before incubation. To enhance the signal-to-noise ratio, neighbouring bands in the selected wavelength region were averaged, with the grouping related to the dispersion of the wavelength isolator at different wavelengths. This resulted in a final hypercube of 400 rows × 400 columns × 256 bands.
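As a rough illustration of this crop-and-average step, the following NumPy sketch bins a hypercube down to 256 bands (the uniform binning and the toy cube are assumptions; the paper's grouping followed the dispersion of the wavelength isolator):

```python
import numpy as np

# Hypothetical sketch: crop a (rows, cols, bands) hypercube to 500-950 nm
# and bin-average neighbouring bands down to 256 output bands.
def crop_and_bin(cube, wavelengths, lo=500.0, hi=950.0, n_out=256):
    keep = (wavelengths >= lo) & (wavelengths <= hi)
    cube = cube[:, :, keep]
    wl = wavelengths[keep]
    # split the retained bands into n_out nearly equal groups, average each
    idx = np.array_split(np.arange(cube.shape[2]), n_out)
    binned = np.stack([cube[:, :, g].mean(axis=2) for g in idx], axis=2)
    wl_out = np.array([wl[g].mean() for g in idx])
    return binned, wl_out

# toy example: small cube with 1279 bands spanning 400-1000 nm
rng = np.random.default_rng(0)
cube = rng.random((4, 4, 1279))
wl = np.linspace(400, 1000, 1279)
binned, wl_out = crop_and_bin(cube, wl)
print(binned.shape)
```

With the real data this would turn the 400 × 400 × 1279 hypercube into the final 400 × 400 × 256 one.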
For spectral calibration, a reference image was captured by allowing light to pass through the camera lens while adjusting the exposure time to the minimum 2 . The dark image was obtained with an exposure time of 100 ms by turning off the light sources and covering the camera with its cap.
Image processing and spectra extraction
When dealing with fertilized eggs on day 0, it should be noted that the embryo has not developed enough to be easily recognized without breaking the egg. The blastoderm, a single layer of embryonic epithelial tissue in a fertilized egg, is a symmetrical circular ring approximately 3–4 mm in diameter, while the blastodisc in unfertilized eggs appears as an asymmetrical solid spot with a smaller diameter of about 2.5 mm. Furthermore, the blastoderm in fertile eggs has a lower density in the Area Pellucida (AP) and a higher density in the Area Opaca (AO) regions around its perimeter 4 . In addition to changes in the density of AP and AO, molecular alterations also occur in fertilized eggs, offering an alternative indirect method for detecting unincubated fertile and infertile eggs. For instance, Padliya, et al. 17 observed that 9 proteins increased and 9 proteins decreased in abundance in fertilized egg yolk relative to unfertilized egg yolk. Qiu, et al. 18 and Qiu, et al. 19 investigated alterations in egg white protein and the original albumen differences, revealing more than tenfold differences (P < 0.01) in abundance between fresh unfertilized and fertilized chicken egg whites. Whereas the small difference in size between the blastoderm and blastodisc, along with limitations in the spatial resolution of hyperspectral images, confines the spatial analysis of fertilized unincubated eggs, the changes in the density of AP and AO and in the protein content of the egg yolk can make spectral analysis a more adequate approach for differentiating between fertilized and unfertilized eggs before incubation.
To speed up spectral processing and eliminate irrelevant information, the region of interest (ROI) of hyperspectral images should be confined to the egg region. Based on studies that emphasize the molecular differences between fertilized and unfertilized eggs, particularly the changes in protein content in yolk 17 and albumen 18 , 19 , this research utilized the entire egg area as the ROI for subsequent image processing analysis. For this purpose, the spectral image with the most contrast between the egg and the background was selected by visual investigation. This image was found around the wavelength of 630 nm (Fig. 2 a) and was then used to perform background removal operations. The edge detection method was applied to segment the image into egg and background, producing a binary image (code zero for the background and code one for the egg) (Fig. 2 b). By multiplying this binary image with the images at the other wavelengths, the hyperspectral image without background was obtained. This operation was performed on all acquired hyperspectral images.
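The masking step described above can be sketched as follows (a minimal NumPy stand-in: a fixed intensity threshold replaces the edge-detection segmentation used in the paper, and the toy cube is hypothetical):

```python
import numpy as np

# Sketch of background removal: build a binary mask (0 = background,
# 1 = egg) from the high-contrast band and apply it to every band of the
# hypercube by elementwise multiplication.
def remove_background(cube, band_idx, threshold):
    band = cube[:, :, band_idx]
    mask = (band > threshold).astype(cube.dtype)  # binary egg mask
    return cube * mask[:, :, None]                # zero out the background

# toy cube: bright 2x2 "egg" in the centre of a dark 4x4 scene, 3 bands
cube = np.zeros((4, 4, 3))
cube[1:3, 1:3, :] = 5.0
masked = remove_background(cube, band_idx=0, threshold=1.0)
print(masked[0, 0, 1], masked[1, 1, 1])  # background zeroed, egg kept
```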
In the next step, the average intensity of spectral data obtained from each image ( T s ) was calculated from the processed images. The relative transmittance spectrum of the samples ( T rel ) was then calculated to eliminate the interference by the optical system. This operation was done after collecting the dark and reference images. The intensity of dark ( T d ) and reference ( T r ) spectra were calculated by averaging the spectral data in the related images. Then, Eq. ( 1 ) was used to calculate the relative spectrum of the samples:

T_rel = [(T_s − T_d) / e_s] / [(T_r − T_d) / e_r]    (1)

where e s and e r are the exposure times of the camera while capturing the sample and reference images, respectively. Image processing and feature extraction approaches were all carried out using “MATLAB” V2014a (The MathWorks, Natick, USA).
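A minimal sketch of this calibration step, assuming the relative spectrum is the dark-corrected sample signal per unit exposure divided by the dark-corrected reference signal per unit exposure (the toy count values below are hypothetical):

```python
import numpy as np

# Assumed form of the relative-transmittance calculation: both sample and
# reference spectra are dark-corrected and normalised by their exposure
# times before taking the ratio.
def relative_transmittance(T_s, T_r, T_d, e_s, e_r):
    return ((T_s - T_d) / e_s) / ((T_r - T_d) / e_r)

T_s = np.array([400.0, 600.0])   # mean sample spectrum (counts)
T_r = np.array([900.0, 1000.0])  # mean reference spectrum (counts)
T_d = np.array([100.0, 100.0])   # mean dark spectrum (counts)
T_rel = relative_transmittance(T_s, T_r, T_d, e_s=100.0, e_r=10.0)
print(T_rel)
```

The exposure-time ratio matters here because the reference image was captured at the minimum exposure while the samples used 100 ms.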
Spectral data analysis
The proposed framework for detecting the fertile and infertile eggs and selecting the most informative wavelength variables is shown in Fig. 3 . After extracting the sample spectra, spectral preprocessing was performed to eliminate undesired and irrelevant information caused by factors such as light scattering effects and detector anomalies, and to highlight the differences between spectra for further analysis. Savitzky-Golay smoothing was first used to remove instrument noise. In addition, the intensity of the radiation had a great effect on hyperspectral images; therefore, methods such as standard normal variate (SNV) and normalization were used to minimize the effect of light intensity 20 . Other pretreatments, such as baseline correction and multiplicative scatter correction (MSC), were also implemented to resolve baseline changes and additive and/or multiplicative effects in the spectral data, respectively. Finally, Savitzky-Golay first and second derivatives were used to correct overlapping peaks and baseline variations, so that broad bands were suppressed into sharp bands 21 .
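Two of these pretreatments can be sketched in NumPy as follows (SNV as defined above; a plain finite-difference gradient stands in for the smoothed Savitzky-Golay derivative, and the random spectra are toy data, not the paper's):

```python
import numpy as np

# SNV: centre and scale each spectrum by its own mean and standard
# deviation, removing multiplicative intensity effects.
def snv(spectra):
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Simple finite-difference first derivative along the wavelength axis;
# a stand-in for the Savitzky-Golay derivative used in the paper.
def first_derivative(spectra):
    return np.gradient(spectra, axis=1)

rng = np.random.default_rng(1)
spectra = rng.random((5, 50)) + np.linspace(0, 2, 50)  # 5 spectra, 50 bands
snv_spectra = snv(spectra)
deriv = first_derivative(snv_spectra)
print(snv_spectra.shape, deriv.shape)
```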
Moreover, principal component analysis (PCA) was used to review the spectral data and remove outliers. Samples positioned outside the Hotelling T 2 ellipse (at the 95% confidence level) were recognized as potential outliers and eliminated from the data set. The data set was then randomly divided into two subsets: a calibration subset (80% of total samples) used for building the models, and an independent test subset (the remaining 20% of samples) applied to assess the ability of the models in prediction.
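This screening and splitting step can be sketched as follows (a NumPy/SciPy stand-in for the Unscrambler workflow; the number of PCs, the planted outlier, and the toy data are assumptions):

```python
import numpy as np
from scipy.stats import f

# SVD-based PCA scores; samples whose Hotelling T2 exceeds the 95% limit
# are flagged as potential outliers and dropped.
def hotelling_t2_keep(X, n_pc=2, alpha=0.95):
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_pc] * S[:n_pc]
    n = X.shape[0]
    t2 = np.sum(scores**2 / scores.var(axis=0, ddof=1), axis=1)
    lim = n_pc * (n - 1) / (n - n_pc) * f.ppf(alpha, n_pc, n - n_pc)
    return t2 <= lim                      # True = keep sample

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))
X[0] += 25.0                              # plant one gross outlier
keep = hotelling_t2_keep(X)
X_clean = X[keep]

# random 80/20 calibration/test split of the cleaned data
idx = rng.permutation(len(X_clean))
n_cal = int(0.8 * len(X_clean))
cal, tst = X_clean[idx[:n_cal]], X_clean[idx[n_cal:]]
print(keep[0], len(cal), len(tst))
```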
Generally, model building and feature selection approaches carried out in this study comprised two main steps. In the first step, the ability of different linear and nonlinear classifiers to discriminate the fertile and infertile classes was evaluated using all of the wavelength variables in the Vis-SWNIR region. In the next step, the most effective wavelength variables were selected by analysis of classifiers to reach a discrete number of wavelengths, suitable for multispectral imaging systems (Fig. 3 ).
Classification by whole spectral data
The machine learning tools used in this study included soft independent modeling of class analogy (SIMCA), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and nonlinear artificial neural networks (ANN). These approaches are described briefly below, with key references cited for more details.
SIMCA is a supervised classifier that describes each class individually in PCA sub-models. The constructed sub-models are then used to evaluate whether a new sample belongs to the class. The result of SIMCA analysis is also very useful for evaluating the discrimination power of each wavelength variable. The discrimination power plot obtained from SIMCA analysis shows which wavelengths are most effective for discriminating between two classes. Variables that result in a discrimination power of more than 3 can be regarded as useful in distinguishing between two classes 22 .
LDA is another supervised pattern recognition method that is used to find the linear combination of features to separate two or more classes. It describes a linear separating hyperplane by calculating the linear discriminant functions. The linear functions are used to determine to which class an unknown sample belongs. In this method, the maximum ratio of the variance between the classes to the variance within the class is obtained in the direction of the normal vector of the separating hyperplane 23 .
QDA is a more general version of LDA that classifies observations from different multivariate normal populations. While LDA calculates the Mahalanobis distance from a common covariance matrix shared by all classes, QDA derives the Mahalanobis distance from class-specific covariance matrices 24 . QDA models use quadratic surfaces to separate two or more classes based on Gaussian distributions, and posterior distributions are then used to assign the class of a given test sample. The Gaussian parameters are estimated by maximum likelihood 25 . QDA is especially recommended when gross violation of the homogeneity of within-class variance is suspected.
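The pooled-versus-class-specific covariance distinction between LDA and QDA can be illustrated with a small NumPy sketch (toy data and a deliberately simple nearest-Mahalanobis-distance rule; not the implementation used in the paper):

```python
import numpy as np

# Squared Mahalanobis distance of a point to a class mean.
def mahalanobis2(x, mean, cov):
    d = x - mean
    return d @ np.linalg.inv(cov) @ d

# Nearest-class rule: covs is either one shared matrix (LDA-style)
# or a list of per-class matrices (QDA-style).
def classify(x, means, covs):
    dists = [mahalanobis2(x, m, covs[i] if isinstance(covs, list) else covs)
             for i, m in enumerate(means)]
    return int(np.argmin(dists))

rng = np.random.default_rng(3)
A = rng.normal(0.0, 1.0, size=(200, 2))   # class 0: tight cloud
B = rng.normal(4.0, 3.0, size=(200, 2))   # class 1: wide cloud
means = [A.mean(axis=0), B.mean(axis=0)]
pooled = np.cov(np.vstack([A - means[0], B - means[1]]).T)  # shared (LDA)
per_class = [np.cov(A.T), np.cov(B.T)]                      # per class (QDA)

x = np.array([1.8, 1.8])                  # between the two clouds
print(classify(x, means, pooled), classify(x, means, per_class))
```

With the shared covariance the test point is assigned to the tight class, while the class-specific covariances make the wide class more plausible, illustrating the within-class variance heterogeneity that motivates QDA.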
In ANN analysis, multi-layer, feed-forward networks with the back-propagation (BP) learning algorithm were used for detecting egg fertility. The constructed ANN consisted of one input layer to transfer the spectral data processed by the best pretreatment into the network, one hidden layer, and one output layer with two nodes for the fertile and infertile classes. The optimum number of nodes in the hidden layer (NHL) was found by a trial and error procedure. For each NHL, three networks were developed and their average accuracy was calculated. The plot of average accuracy versus NHL was used to find the optimum NHL and hence the best topology of the networks.
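The topology search can be sketched with scikit-learn as a stand-in for the STATISTICA networks used in the paper (the candidate node counts, toy data, and three seeds per topology are assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# For each candidate number of hidden-layer nodes, train three networks
# with different seeds and average their test accuracy; the NHL with the
# highest average accuracy defines the network topology.
rng = np.random.default_rng(5)
X = rng.random((120, 20))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)        # toy two-class target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
mean_acc = {}
for nhl in (2, 5, 10):
    accs = [MLPClassifier(hidden_layer_sizes=(nhl,), max_iter=500,
                          random_state=seed).fit(X_tr, y_tr).score(X_te, y_te)
            for seed in (0, 1, 2)]
    mean_acc[nhl] = float(np.mean(accs))
best_nhl = max(mean_acc, key=mean_acc.get)
print(best_nhl, mean_acc[best_nhl])
```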
After finding the best ANN model based on total spectral data, sensitivity analysis was performed to identify the most important input variables or wavelengths in explaining variances in the model output. Generally, for a trained network with specific parameters for each input variable, sensitivity analysis determines the effect of varying the parameters of the network for each variable on the overall network fit 26 .
The statistical parameters used to evaluate the classifiers comprised sensitivity (Sen.), specificity (Spe.), precision (Precis.), and accuracy (Acc.), defined by Eqs. ( 2 ) to ( 5 ), respectively 27 :

Sen. = TP / (TP + FN)    (2)
Spe. = TN / (TN + FP)    (3)
Precis. = TP / (TP + FP)    (4)
Acc. = (TP + TN) / (TP + TN + FP + FN)    (5)

where TP (or true positive prediction) and TN (or true negative prediction) are the numbers of fertile and infertile eggs, respectively, that were correctly classified, and FP (or false positive prediction) and FN (or false negative prediction) are the numbers of infertile and fertile eggs that were misclassified as fertile and infertile, respectively.
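These metrics follow directly from the confusion counts. In the example below, the counts (26 fertile and 19 infertile test eggs) are inferred from the best SIMCA test-set percentages reported in the Results, so they are an illustration rather than data taken from the paper:

```python
# Sensitivity, specificity, precision, and accuracy (Eqs. (2)-(5))
# computed from the confusion counts: TP/TN correct fertile/infertile,
# FP/FN the corresponding misclassifications.
def metrics(tp, tn, fp, fn):
    sen = tp / (tp + fn)                    # sensitivity, Eq. (2)
    spe = tn / (tn + fp)                    # specificity, Eq. (3)
    precis = tp / (tp + fp)                 # precision,   Eq. (4)
    acc = (tp + tn) / (tp + tn + fp + fn)   # accuracy,    Eq. (5)
    return sen, spe, precis, acc

# all fertile eggs correct, 6 of 19 infertile misclassified
sen, spe, precis, acc = metrics(tp=26, tn=13, fp=6, fn=0)
print(round(sen, 4), round(spe, 4), round(precis, 4), round(acc, 4))
```

These counts reproduce the reported sensitivity of 100%, specificity of 68.42%, precision of 81.25%, and accuracy of 86.67%.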
Outlier detection, spectral pre-processing, PCA, and the SIMCA, LDA, and QDA classifications were all performed using the statistical software package ‘The Unscrambler’ V10.4 (CAMO AS, Trondheim, Norway). The ANN analysis was carried out with the neural networks package of ‘STATISTICA’ V12 (StatSoft, Inc., CA, USA).
Selecting the effective wavebands
Because of the large number of spectral bands in hyperspectral images, data analysis is relatively slow. Therefore, multispectral imaging systems, which work with a limited number of wavelengths, are suggested. These wavelengths should be the most informative ones, determined through the analysis of hyperspectral images and successful predictive models.
Following the acceptable results of the SIMCA classifier accomplished by 1st derivative pretreatment, in the next step, the discrimination power plot was used to select the most influential wavebands. The effectiveness of selected wavebands in detecting fertility was evaluated by employing them as input for the ANN models. However, since these regions were discontinuous, the 1st derivative operation could not be carried out via common algorithms. Therefore, two approaches were selected and tested to overcome this problem. First, the raw spectral data of interested wavebands were used as the input to the ANN classifier, assuming that the preprocessing operation can be done by adjusting the weights in the nonlinear ANN method. In the second approach, the spectral difference between all possible pairs of wavelengths in the obtained effective wavebands was calculated to substitute the 1st derivative of transmission spectra 16 . The transmission difference values ( T (λ 1 ) − T (λ 2 )) were then used as the input of the ANN. Finally, the performance of constructed ANNs was compared and the best classifier based on the discrete number of wavelengths was presented.
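The second approach, substituting pairwise transmission differences for the derivative on discontinuous bands, can be sketched as follows (the band indices and toy spectra are hypothetical):

```python
import numpy as np
from itertools import combinations

# For the wavelengths kept in the effective bands, compute
# T(l1) - T(l2) for every pair as a stand-in for the 1st-derivative
# pretreatment, which cannot be applied across discontinuous regions.
def pairwise_differences(spectra, band_idx):
    pairs = list(combinations(band_idx, 2))
    feats = np.stack([spectra[:, i] - spectra[:, j] for i, j in pairs],
                     axis=1)
    return feats, pairs

rng = np.random.default_rng(4)
spectra = rng.random((10, 256))          # 10 samples, 256 bands
band_idx = [100, 101, 170, 171, 200]     # hypothetical effective bands
feats, pairs = pairwise_differences(spectra, band_idx)
print(feats.shape, len(pairs))
```

The resulting difference features would then serve as the input to the ANN classifier.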
Ethics approval
Animal experiments were conducted in accordance with the Guiding Principles for the Care and Use of Research Animals at Isfahan University of Technology. The protocol and methods received approval from the Committee on Animal Experiment Ethics at Isfahan University of Technology (No. 390132). The study adhered to the ARRIVE guidelines for reporting animal research, experimental design, and data reporting 28 .

Results and discussion
Overview of spectral data
The raw and 1st derivative of average transmission spectra of fertile and infertile eggs are shown in Fig. 4 . As shown, the fertile samples had higher raw transmission values than the infertile ones in all spectral regions (Fig. 4 a). Differences in chemical substances, such as protein content 18 , 19 , and physical properties, like the egg shape index 8 , between fertile and infertile chicken eggs could lead to variations in the transmission spectral behavior. Similar transmission values and trends in the spectra of fertile and infertile eggs were observed in the studies performed by conventional transmission spectroscopy systems 8 , 15 , 29 . However, more absorption bands appeared in transmission spectra obtained by the HSI system. As shown in Fig. 4 , in addition to broad absorption bands, significant absorptions (valleys in transmission spectra) were revealed in the spectra of fertile and infertile eggs around the wavelengths of 580, 634, 665, 730, 770, 830, and 930 nm. Since the raw spectra had relatively broad absorption bands, to improve the quality of presentation, the 1st derivative spectra were calculated (Fig. 4 b).
As shown in Fig. 4 , extensive absorption bands in the visible regions were observed around 580, 634, and 665 nm wavelengths. Considering the white shell of eggs used in this study, these absorptions cannot be attributed to the shell color pigment of protoporphyrin which produces relatively strong absorption in brown-shelled egg spectra in the mentioned wavelengths 15 , 29 . Therefore, these broad absorptions were likely due to the color pigments in the yolk such as carotenoids which are a fat-soluble group of yellow (580 nm), orange (634 nm), and red (665 nm) pigments 30 .
The wavebands around 730 nm are related to the third overtone of O–H in water 31 and the carbohydrates of eggs. Since the embryo consumes carbohydrates, proteins, and fats 32 , the higher transmission value of fertile samples around 730 nm is likely due to the relatively lower carbohydrate, protein, and fat contents of fertile samples with respect to the infertile ones (Fig. 4 a).
The relatively similar internal conditions and common compounds between these two groups of samples could result in similar absorption bands and make it difficult to distinguish the two classes based on the visual interpretation of spectra. However, by a close investigation of 1st derivative spectra (Fig. 4 b), the difference in transmitted light was observed between two classes around the discussed wavelengths. The remarkable deviations between the two classes could be observed around the wavelengths of 730, 770, and 830 nm. While the red-edge wavelengths around 770 nm 33 were related to embryo development 1 , absorption around 830 nm is triggered by the 3rd overtone of the O–H stretching, related to the water content inside of eggs 31 . Finally, the strong absorption around 930 nm can be associated with 3rd overtones of the C-H stretching absorption, which may be related to the carbohydrate content in the egg 34 .
Classification by SIMCA
Table 1 shows the test set validation results of SIMCA analysis performed by using entire spectral data for the detection of infertile and fertile eggs before incubation. The effect of different pre-processing and the optimum number of PCs for modeling each class were also presented in this table. Among the various mathematical pretreatments, the best discrimination accuracy was achieved from the 1st and 2nd derivatives. In both pretreatments, all fertile eggs were correctly classified (sensitivity of 100%) and 6 infertile eggs were misclassified into fertile (specificity of 68.42%), resulting in an accuracy of 86.67%. Both pretreatments led to a similar precision of 81.25%. Due to the higher price of fertile eggs, it is more important to correctly identify the fertile eggs prior to incubation to avoid unwanted elimination and increase the hatchability rate. Therefore, the best SIMCA model had promising performance in the correct identification of this group of eggs.
In terms of fertile and infertile detection of eggs before incubation by HSI, our predictions based on the SIMCA method were much better than those reported by Smith, et al. 13 (accuracy of 63% for day 0, before incubation), Lin, et al. 6 (accuracy of 96% for day 14, after incubation), and Park, et al. 1 (accuracy of 99% for day 14, after incubation). In one case, however, our best SIMCA model was less accurate than that presented by Liu and Ngadi (2013), in which a near-infrared HSI system in the range of 900–1700 nm was used for the detection of fertile and infertile eggs prior to incubation. Their model was able to correctly classify all fertile and infertile eggs with an overall accuracy of 100%. Despite the same performance in the detection of fertile eggs (100%), our best SIMCA model was weaker in the detection of infertile ones (specificity of 68.42%). The limited waveband (430–960 nm) in our study, provided by a lower-price hyperspectral camera, can be regarded as one of the main reasons for these results. Nevertheless, in the study of Liu and Ngadi (2013), there was an unequal distribution of samples per class on day 0 (prior to incubation), where a total of 18 infertile eggs were used against 156 fertile ones in the training and validating datasets, and no justification was given for handling this imbalanced classification problem. The main question here was how far advanced multivariate techniques could compensate for the drawback of the shorter spectral range.
Figure 5 shows the discrimination power plot of different wavelengths for the separation of the fertile (day 0) class from the infertile one obtained from the best model by 1st derivative pretreatment. This plot shows which wavelengths were most effective for distinguishing between two classes. As a rule, variables that result in a discrimination power of more than 3 are very useful in distinguishing between two classes 22 . These variables were specified by distinct regions and relatively narrow bands in Fig. 5 , represented by R 1 (673–675 nm), R 2 (813–840 nm), and R 3 (865–873 nm). Additionally, the discrimination power values around the wavelength of 799 nm were close to 3, indicating the relative importance of this wavelength. It is noteworthy that the important areas were located mostly at the end of the visible region (R 1 ) and in the NIR region (799 nm, R 2 , and R 3 ). The effect of fertility on wavelengths around 799 nm can be related to the possible formation of blood spots due to the initial stages of embryo development. By examining the PCA images, Park, et al. 1 observed a distinct blood vessel pattern in the viable eggs and reported a high weighting coefficient value around 799 nm in the corresponding PCA loading plot.
Moreover, it seems that the presence of the embryo could influence the NIR region more significantly than the visible one (Fig. 5 ). It can be due to the possible changes in the egg’s chemical composition because of embryo existence 17 , 30 , 35 , making the NIR region more important in separating fertile eggs from infertile ones. The initial parts of the spectrum, especially the wavelength range of 500–650 nm had a lower degree of importance for separation between the two classes.
Among the distinguished regions, the R 2 region, including the important wavelength of 830 nm, resulted in the highest discrimination power values (Fig. 5 ). By referring to Fig. 4 , the wavelengths around 830 nm provided a relatively strong absorption for both fertile and infertile eggs with the remarkable absorption difference between these two groups of samples. As indicated in the overview of spectral data section, the absorptions around 830 nm can be related to 3rd overtones of the O–H stretching absorption, which according to egg ingredients can be attributed to the carbohydrates and water content of the egg. Since the embryo consumes carbohydrates, proteins, and fats during its development 32 , the higher discrimination power values of the R 2 region were likely due to the relatively lower carbohydrate, protein, and fat contents of fertile samples with respect to the infertile ones.
Discrimination by LDA and QDA
Table 2 summarizes the test set validation results of QDA and LDA classifiers obtained by various mathematical pretreatments. Similar to SIMCA analysis, the best discrimination accuracy was achieved by the 1st and 2nd derivatives. In the best QDA and LDA models, all of the fertile eggs were correctly classified (sensitivity of 100%) and 5 infertile eggs were misclassified into fertile (specificity of 73.68%), resulting in an accuracy of 88.90%. The maximum precision of both models reached 83.87%. In comparison with the SIMCA analysis with the accuracy and precision of 86.67% and 81.25% respectively, the best QDA and LDA models resulted in slightly better performance (accuracy of 88.90% and precision of 83.87%), indicating 2.6% and 3.2% improvement in discrimination accuracy and precision, respectively. The relatively similar performance of the LDA and QDA methods, when compared to the SIMCA method, suggested that more advanced approaches, capable of better capturing the potential nonlinear nature of the data, are necessary to develop more robust predictive models.
Linear classification by effective wavelengths
Table 3 summarizes the results of linear classifiers, including SIMCA, LDA, and QDA, for fertility detection using the selected regions (R 1 , R 2 , and R 3 ) and the spectral differences between two pairs of wavelengths in these selected regions as input variables. As shown, employing the effective wavelengths in the selected regions did not yield promising performances across all classifiers. In the best-case scenario, SIMCA achieved a sensitivity of 61.54%, specificity of 73.68%, precision of 76.19%, and accuracy of 66.67%. It appears that the wavelength variables situated outside the selected region contained some information to describe the variance of the class variable (fertile or infertile), essential for developing a reliable linear model. Furthermore, the utilization of selected variables in their raw form, combined with their inclusion in linear models lacking the capacity to address nonlinear data, contributes to the challenges encountered in fertility detection using selected regions.
However, when utilizing wavelength differences as a simulation of 1st derivative pretreatment, predictability improved, although it remained lower than the models developed using the entire spectra as the input variable. For SIMCA, LDA, and QDA, accuracy values of 71.11%, 84.44%, and 82.22% were obtained, respectively (Table 3 ). In comparison, the corresponding values when using the entire spectral data were 86.67%, 88.89%, and 88.89%, respectively (Tables 1 and 2 ). It seems that spectral differences can effectively simulate the 1st derivative pretreatment in distinct wavelength variables. Nevertheless, linear classification still does not reach the same level of predictability as when the entire spectral data is utilized.
Classification by ANN
Table 4 shows the results of the ANN classifier obtained from the total spectral data, the effective wavelengths in the selected regions (R 1 , R 2 , and R 3 ), and the spectral differences between pairs of wavelengths in the selected regions, as the input variables. Figure 6 illustrates the classification accuracy of ANNs with the number of nodes in the hidden layer varying from 1 to 30. In using the total spectral data as the inputs to the networks (Fig. 6 a), the best prediction accuracy was achieved with 10 nodes in the hidden layer. This network was able to separate the fertile eggs from the infertile ones with sensitivity, specificity, precision, and accuracy of 96.15%, 84.21%, 89.29%, and 91.11% (Table 4 ). As shown, a noticeable improvement in the prediction power for infertile eggs occurred when the nonlinear ANN method was applied to the whole spectral data. While the best linear model could distinguish 73.68% of infertile eggs, 84.21% of infertile eggs could be successfully discriminated by the best ANN model based on the total spectral data. This showed a 14.3% improvement in the specificity of the predictive model.
Figure 7 illustrates the sensitivity plot based on the best ANN model developed by the total spectral data. As shown, the wavelengths around 524, 656, 767, and 860 nm resulted in the highest degree of importance in the sensitivity analysis. These wavelengths were also close to the distinguished ones that appeared in Fig. 4 as the absorption valleys in transmission spectra. Moreover, the wavelengths around 665 and 865 nm were identified as the efficient wavelengths in the discrimination power plot of SIMCA analysis. The coincidence in the identified wavebands between linear (SIMCA) and nonlinear (ANN) approaches highlighted the importance of the selected regions R 1 and R 3 in detecting fertility in unincubated chicken eggs.
In using the selected regions (Fig. 6 b) and spectral differences (Fig. 6 c) as the input variables to the ANN classifier, the optimized topologies of networks were attained with 11 and 7 nodes in the hidden layer, respectively. Despite a slight difference in precision values attained by selected regions and their spectral differences (92.59% and 89.65%, respectively), similar accuracies (accuracy of 93.33%) were achieved when the selected regions and spectral differences were used as the input variables (Table 4 ). Interestingly, and in contrast to the linear classifiers, the accuracy of prediction slightly improved when employing fewer variables from effective regions or spectral differences as input for nonlinear ANN methods, compared to using the full range data (accuracy of 91.11%). This improvement may be attributed to the elimination of unnecessary and irrelevant data that could otherwise have a detrimental impact on the ANN models.
Among the best networks, the ANN model developed by the effective region was the most successful one in separating the infertile eggs with a specificity of 89.47%. Only 2 infertile eggs were misclassified as fertile. However, the model developed by spectral differences was more efficient in predicting the fertile eggs in which all of them were correctly classified (sensitivity of 100%).
On the whole, the ANN method outperformed the other classifiers in detecting fertility. Compared to the best SIMCA and LDA/QDA models, the accuracy was improved by 7.7% and 5% when the best ANN model was used to discriminate the eggs. | Results and discussion
Overview of spectral data
The raw and 1st derivative of average transmission spectra of fertile and infertile eggs are shown in Fig. 4 . As shown, the fertile samples had higher raw transmission values than the infertile ones in all spectral regions (Fig. 4 a). Differences in chemical substances, such as protein content 18 , 19 , and physical properties, like the egg shape index 8 , between fertile and infertile chicken eggs could lead to variations in the transmission spectral behavior. Similar transmission values and trends in the spectra of fertile and infertile eggs were observed in the studies performed by conventional transmission spectroscopy systems 8 , 15 , 29 . However, more absorption bands appeared in transmission spectra obtained by the HSI system. As shown in Fig. 4 , in addition to broad absorption bands, significant absorptions (valleys in transmission spectra) were revealed in the spectra of fertile and infertile eggs around the wavelengths of 580, 634, 665, 730, 770, 830, and 930 nm. Since the raw spectra had relatively broad absorption bands, to improve the quality of presentation, the 1st derivative spectra were calculated (Fig. 4 b).
As shown in Fig. 4 , extensive absorption bands in the visible regions were observed around 580, 634, and 665 nm wavelengths. Considering the white shell of eggs used in this study, these absorptions cannot be attributed to the shell color pigment of protoporphyrin which produces relatively strong absorption in brown-shelled egg spectra in the mentioned wavelengths 15 , 29 . Therefore, these broad absorptions were likely due to the color pigments in the yolk such as carotenoids which are a fat-soluble group of yellow (580 nm), orange (634 nm), and red (665 nm) pigments 30 .
The wavebands around 730 nm are related to the third overtone of O–H in water 31 and the carbohydrates of eggs. Since the embryo consumes carbohydrates, proteins, and fats 32 , the higher transmission value of fertile samples around 730 nm is likely due to the relatively lower carbohydrate, protein, and fat contents of fertile samples with respect to the infertile ones (Fig. 4 a).
The relatively similar internal conditions and common compounds between these two groups of samples could result in similar absorption bands and make it difficult to distinguish the two classes by visual interpretation of the spectra. However, a close investigation of the 1st-derivative spectra (Fig. 4b) revealed differences in transmitted light between the two classes around the discussed wavelengths. The most remarkable deviations between the two classes were observed around the wavelengths of 730, 770, and 830 nm. While the red-edge wavelengths around 770 nm 33 were related to embryo development 1, absorption around 830 nm is triggered by the 3rd overtone of the O–H stretching, related to the water content inside the eggs 31. Finally, the strong absorption around 930 nm can be associated with the 3rd overtone of the C–H stretching absorption, which may be related to the carbohydrate content of the egg 34.
Classification by SIMCA
Table 1 shows the test set validation results of SIMCA analysis performed by using entire spectral data for the detection of infertile and fertile eggs before incubation. The effect of different pre-processing and the optimum number of PCs for modeling each class were also presented in this table. Among the various mathematical pretreatments, the best discrimination accuracy was achieved from the 1st and 2nd derivatives. In both pretreatments, all fertile eggs were correctly classified (sensitivity of 100%) and 6 infertile eggs were misclassified into fertile (specificity of 68.42%), resulting in an accuracy of 86.67%. Both pretreatments led to a similar precision of 81.25%. Due to the higher price of fertile eggs, it is more important to correctly identify the fertile eggs prior to incubation to avoid unwanted elimination and increase the hatchability rate. Therefore, the best SIMCA model had promising performance in the correct identification of this group of eggs.
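The four metrics reported throughout (sensitivity, specificity, precision, and accuracy) treat fertile as the positive class. A minimal sketch follows; the test-set split of 26 fertile and 19 infertile eggs used in the check is inferred from the reported percentages, not stated explicitly in the text.

```python
def classification_metrics(y_true, y_pred, positive="fertile"):
    """Confusion-matrix metrics with `positive` as the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),      # fertile eggs correctly detected
        "specificity": tn / (tn + fp),      # infertile eggs correctly detected
        "precision":  tp / (tp + fp),
        "accuracy":   (tp + tn) / len(y_true),
    }
```

With all 26 fertile eggs classified correctly and 6 of 19 infertile eggs misclassified as fertile, this reproduces the reported 100% sensitivity, 68.42% specificity, 81.25% precision, and 86.67% accuracy.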
In terms of fertile and infertile egg detection before incubation by HSI, our SIMCA-based predictions were much better than those reported for day 0 by Smith, et al. 13 (accuracy of 63%, before incubation), while the higher accuracies of Lin, et al. 6 (96%) and Park, et al. 1 (99%) were obtained only on day 14, after incubation. In one case, however, our best SIMCA model was less accurate than that presented by Liu and Ngadi (2013), in which a near-infrared HSI system in the range of 900–1700 nm was used for the detection of fertile and infertile eggs prior to incubation. Their model correctly classified all fertile and infertile eggs with an overall accuracy of 100%. Despite the same performance in the detection of fertile eggs (100%), our best SIMCA model was weaker in the detection of infertile ones (specificity of 68.42%). The limited waveband (430–960 nm) provided by a lower-price hyperspectral camera in our study can be regarded as one of the main reasons for these results. Nevertheless, in the study of Liu and Ngadi (2013), there was an unequal distribution of samples per class on day 0 (prior to incubation), where a total of 18 infertile eggs were used against 156 fertile ones in the training and validation datasets, and no justification was provided for how this imbalanced classification problem was handled. The main challenge here was how much the use of advanced multivariate techniques could compensate for the drawback of the shorter spectral range.
Figure 5 shows the discrimination power plot of different wavelengths for separating the fertile (day 0) class from the infertile one, obtained from the best model with 1st-derivative pretreatment. This plot shows which wavelengths were most effective for distinguishing between the two classes. As a rule, variables with a discrimination power of more than 3 are very useful in distinguishing between two classes 22. These variables were specified by distinct regions and relatively narrow bands in Fig. 5, represented by R 1 (673–675 nm), R 2 (813–840 nm), and R 3 (865–873 nm). Additionally, the discrimination power values around the wavelength of 799 nm were close to 3, indicating the relative importance of this wavelength. It is noteworthy that the important areas were located mostly at the end of the visible region (R 1 ) and in the NIR region (799 nm, R 2 , and R 3 ). The effect of fertility on wavelengths around 799 nm can be related to the possible formation of blood spots during the initial stages of embryo development. By examining the PCA images, Park, et al. 1 observed a distinct blood vessel pattern in the viable eggs and reported a high weighting coefficient value around 799 nm in the corresponding PCA loading plot.
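The rule of thumb that variables with discrimination power above 3 are informative can be turned into a simple region extractor, as a sketch of how bands such as R 1 –R 3 could be pulled from the plot automatically. The wavelength grid and power values in the check are synthetic, not the study's data.

```python
import numpy as np

def select_regions(wavelengths, disc_power, threshold=3.0):
    """Group wavelengths whose discrimination power exceeds `threshold`
    into contiguous regions, returned as (start_nm, end_nm) tuples."""
    mask = np.asarray(disc_power) > threshold
    regions, start = [], None
    for i, flag in enumerate(mask):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            regions.append((wavelengths[start], wavelengths[i - 1]))
            start = None
    if start is not None:                       # region runs to the last band
        regions.append((wavelengths[start], wavelengths[-1]))
    return regions
```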
Moreover, it seems that the presence of the embryo could influence the NIR region more significantly than the visible one (Fig. 5 ). It can be due to the possible changes in the egg’s chemical composition because of embryo existence 17 , 30 , 35 , making the NIR region more important in separating fertile eggs from infertile ones. The initial parts of the spectrum, especially the wavelength range of 500–650 nm had a lower degree of importance for separation between the two classes.
Among the distinguished regions, the R 2 region, including the important wavelength of 830 nm, resulted in the highest discrimination power values (Fig. 5 ). By referring to Fig. 4 , the wavelengths around 830 nm provided a relatively strong absorption for both fertile and infertile eggs with the remarkable absorption difference between these two groups of samples. As indicated in the overview of spectral data section, the absorptions around 830 nm can be related to 3rd overtones of the O–H stretching absorption, which according to egg ingredients can be attributed to the carbohydrates and water content of the egg. Since the embryo consumes carbohydrates, proteins, and fats during its development 32 , the higher discrimination power values of the R 2 region were likely due to the relatively lower carbohydrate, protein, and fat contents of fertile samples with respect to the infertile ones.
Discrimination by LDA and QDA
Table 2 summarizes the test set validation results of QDA and LDA classifiers obtained by various mathematical pretreatments. Similar to SIMCA analysis, the best discrimination accuracy was achieved by the 1st and 2nd derivatives. In the best QDA and LDA models, all of the fertile eggs were correctly classified (sensitivity of 100%) and 5 infertile eggs were misclassified into fertile (specificity of 73.68%), resulting in an accuracy of 88.90%. The maximum precision of both models reached 83.87%. In comparison with the SIMCA analysis with the accuracy and precision of 86.67% and 81.25% respectively, the best QDA and LDA models resulted in slightly better performance (accuracy of 88.90% and precision of 83.87%), indicating 2.6% and 3.2% improvement in discrimination accuracy and precision, respectively. The relatively similar performance of the LDA and QDA methods, when compared to the SIMCA method, suggested that more advanced approaches, capable of better capturing the potential nonlinear nature of the data, are necessary to develop more robust predictive models.
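LDA and QDA are available off the shelf in scikit-learn. The paper does not say whether dimensionality reduction preceded the classifiers, so the sketch below applies them directly to the pretreated spectra and adds a regularization term for QDA (an assumption) to cope with a many-wavelengths, few-samples setting.

```python
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)

def fit_lda_qda(X_train, y_train, X_test):
    """Fit both discriminant classifiers on pretreated spectra and
    return their test-set predictions."""
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    qda = QuadraticDiscriminantAnalysis(reg_param=0.1).fit(X_train, y_train)
    return lda.predict(X_test), qda.predict(X_test)
```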
Linear classification by effective wavelengths
Table 3 summarizes the results of linear classifiers, including SIMCA, LDA, and QDA, for fertility detection using the selected regions (R 1 , R 2 , and R 3 ) and the spectral differences between two pairs of wavelengths in these selected regions as input variables. As shown, employing the effective wavelengths in the selected regions did not yield promising performances across all classifiers. In the best-case scenario, SIMCA achieved a sensitivity of 61.54%, specificity of 73.68%, precision of 76.19%, and accuracy of 66.67%. It appears that the wavelength variables situated outside the selected region contained some information to describe the variance of the class variable (fertile or infertile), essential for developing a reliable linear model. Furthermore, the utilization of selected variables in their raw form, combined with their inclusion in linear models lacking the capacity to address nonlinear data, contributes to the challenges encountered in fertility detection using selected regions.
However, when utilizing wavelength differences as a simulation of 1st derivative pretreatment, predictability improved, although it remained lower than the models developed using the entire spectra as the input variable. For SIMCA, LDA, and QDA, accuracy values of 71.11%, 84.44%, and 82.22% were obtained, respectively (Table 3 ). In comparison, the corresponding values when using the entire spectral data were 86.67%, 88.89%, and 88.89%, respectively (Tables 1 and 2 ). It seems that spectral differences can effectively simulate the 1st derivative pretreatment in distinct wavelength variables. Nevertheless, linear classification still does not reach the same level of predictability as when the entire spectral data is utilized.
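Building the wavelength-difference features described above is a one-liner per pair of bands. The pair (700 nm, 720 nm) in the check is purely illustrative and does not correspond to the study's selected regions.

```python
import numpy as np

def wavelength_differences(spectra, wavelengths, pairs):
    """For each (l1, l2) pair, form the feature T(l2) - T(l1),
    a crude surrogate for the 1st derivative at those variables."""
    wl = list(wavelengths)
    cols = [spectra[:, wl.index(b)] - spectra[:, wl.index(a)] for a, b in pairs]
    return np.stack(cols, axis=1)
```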
Classification by ANN
Table 4 shows the results of the ANN classifier obtained with the total spectral data, the effective wavelengths in the selected regions (R 1 , R 2 , and R 3 ), and the spectral differences between two pairs of wavelengths in the selected regions as the input variables. Figure 6 illustrates the classification accuracy of ANNs with the number of nodes in the hidden layer varying from 1 to 30. When the total spectral data were used as the network inputs (Fig. 6 a), the best prediction accuracy was achieved with 10 nodes in the hidden layer. This network was able to separate the fertile eggs from the infertile ones with sensitivity, specificity, precision, and accuracy of 96.15%, 84.21%, 89.29%, and 91.11%, respectively (Table 4 ). As shown, a noticeable improvement in the prediction power for infertile eggs occurred when the nonlinear ANN method was applied to the whole spectral data. While the best linear model could distinguish 73.68% of infertile eggs, 84.21% of infertile eggs were successfully discriminated by the best ANN model based on the total spectral data, a 14.3% improvement in the specificity of the predictive model.
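The node sweep of Fig. 6 can be reproduced with a loop over single-hidden-layer networks. Scikit-learn's MLPClassifier with the lbfgs solver stands in here for the authors' unspecified ANN implementation, and the train/validation split is left to the user; both choices are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def sweep_hidden_nodes(X_train, y_train, X_val, y_val, max_nodes=30):
    """Train one-hidden-layer ANNs with 1..max_nodes nodes and report
    the node count giving the best validation accuracy."""
    scores = {}
    for n in range(1, max_nodes + 1):
        net = MLPClassifier(hidden_layer_sizes=(n,), solver="lbfgs",
                            max_iter=500, random_state=0)
        scores[n] = net.fit(X_train, y_train).score(X_val, y_val)
    best = max(scores, key=scores.get)
    return best, scores
```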
Figure 7 illustrates the sensitivity plot based on the best ANN model developed by the total spectral data. As shown, the wavelengths around 524, 656, 767, and 860 nm resulted in the highest degree of importance in the sensitivity analysis. These wavelengths were also close to the distinguished ones that appeared in Fig. 4 as the absorption valleys in transmission spectra. Moreover, the wavelengths around 665 and 865 nm were identified as the efficient wavelengths in the discrimination power plot of SIMCA analysis. The coincidence in the identified wavebands between linear (SIMCA) and nonlinear (ANN) approaches highlighted the importance of the selected regions R 1 and R 3 in detecting fertility in unincubated chicken eggs.
In using the selected regions (Fig. 6 b) and spectral differences (Fig. 6 c) as the input variables to the ANN classifier, the optimized topologies of networks were attained with 11 and 7 nodes in the hidden layer, respectively. Despite a slight difference in precision values attained by selected regions and their spectral differences (92.59% and 89.65%, respectively), similar accuracies (accuracy of 93.33%) were achieved when the selected regions and spectral differences were used as the input variables (Table 4 ). Interestingly, and in contrast to the linear classifiers, the accuracy of prediction slightly improved when employing fewer variables from effective regions or spectral differences as input for nonlinear ANN methods, compared to using the full range data (accuracy of 91.11%). This improvement may be attributed to the elimination of unnecessary and irrelevant data that could otherwise have a detrimental impact on the ANN models.
Among the best networks, the ANN model developed by the effective region was the most successful one in separating the infertile eggs with a specificity of 89.47%. Only 2 infertile eggs were misclassified as fertile. However, the model developed by spectral differences was more efficient in predicting the fertile eggs in which all of them were correctly classified (sensitivity of 100%).
On the whole, the ANN method outperformed the other classifiers in detecting fertility. Compared to the best SIMCA and LDA/QDA models, the accuracy was improved by 7.7% and 5%, respectively, when the best ANN model was used to discriminate the eggs.

Conclusion
In this study, the feasibility of a line-scan hyperspectral imaging system in the Vis-SWNIR region was assessed for early detection of non-fertile eggs on day 0, before incubation. After capturing hyperspectral images of eggs, the edge detection method was used to segment the ROI, and the image spectral information was extracted from the ROI. Then various pretreatment methods as well as different classifiers, including SIMCA, LDA, QDA, and ANN, were applied to eliminate the unwanted information and extract the predictive models. The following conclusions can be drawn from our results:

- Following the acceptable results of SIMCA analysis along with 1st-derivative pretreatment (accuracy of 86.67%), the discrimination power plot was used to select the most informative wavebands for discerning the fertile and non-fertile classes.
- The best QDA and LDA models resulted in slightly better performances in comparison with the SIMCA analysis. Notably, all the fertile eggs were correctly classified in these models, resulting in a sensitivity of 100%.
- The best ANN model based on the total spectral data was able to separate the fertile eggs from the infertile ones with precision and accuracy of 89.29% and 91.11%, respectively. By using the fewer and most informative wavelengths obtained by SIMCA analysis as the input variables to ANN models, an improvement in prediction power occurred: both wavelength differences and raw spectral data in the selected wavebands led to a satisfactory classification accuracy of 93.33%.
- All fertile eggs were correctly classified by the model based on wavelength differences (sensitivity of 100%).

Abstract

Detection of infertile eggs prior to incubation can increase the hatchability rate and prevent the wastage of billions of non-fertile eggs that end up in failed incubation.
In this study, the feasibility of a line-scan hyperspectral imaging system in the visible and short-wavelength near-infrared region was assessed for early detection of non-fertile eggs on day 0 before incubation. A total of 227 white-shell eggs including 131 fertile and 96 infertile eggs were collected from a flock with similar conditions in terms of hen age, feeding, and management. Hyperspectral images of eggs were captured on day 0 before incubation in a transmittance mode of illumination and then the eggs were incubated in a commercial incubator. The edge detection method was used to segment the egg, including both the white and yolk, from the background, and the image spectral information was extracted from the egg region. After applying various pretreatment methods, different classifiers including soft independent modeling of class analogy (SIMCA), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and artificial neural networks (ANN) classifiers were utilized to extract the predictive models. Following the acceptable results of SIMCA analysis accomplished by 1st derivative pretreatment (accuracy of 86.67%), the discrimination power plot was used to select the most informative wavebands. The results showed that by using fewer variables in effective wavebands better performance (precision and accuracy of 92.59% and 93.33%, respectively) could be obtained in comparison with the ANN classifier based on the whole spectral data (precision and accuracy of 89.29% and 91.11%, respectively). This study revealed the potential application of hyperspectral transmittance imaging in the Vis-SWNIR region to discern the fertile and infertile eggs before starting the incubation process.
The authors gratefully appreciate the financial support from Isfahan University of Technology, Isfahan, Iran.
Author contributions
M.G.: Methodology, validation, formal analysis, investigation, data curation, writing—original draft, visualization. S.A.M.: supervision, conceptualization, formal analysis, investigation, data curation, writing—review & editing, funding acquisition. A.A.M.: supervision, conceptualization, writing—review & editing, funding acquisition. M.S.: methodology, investigation, writing—review & editing. M.N.: methodology, resources, writing—review & editing, funding acquisition.
Data availability
The datasets generated and/or analyzed during the current study are not publicly available, but are available from the corresponding author on reasonable request.
Competing interests
The authors declare no competing interests.

Sci Rep. 2024 Jan 14; 14:1289
|
Introduction
The combustion of fossil fuels for industrialization and transportation, which accompanies the rapid growth of cities, has resulted in the release of massive volumes of CO 2 gas into the atmosphere 1 . However, excessive emissions of CO 2 result in global warming, ocean acidification, and other environmental issues 2 , 3 . Therefore, the question of how to deal with the CO 2 emitted during industrial manufacturing has become an urgent concern 4 . Meanwhile, the fabrication of affordable, reliable, and sustainable chemicals, one of the sustainable development goals, has attracted increasing attention 5 . A feasible and promising solution for long-term sustainable development is the highly selective catalysis of CO 2 into valuable chemicals 6 – 16 , such as light olefins and liquefied petroleum gas (LPG) 17 – 20 . As important intermediates in the manufacture of organic products, light olefins are among the most productive chemicals worldwide, with an annual output exceeding 250 million tons 21 – 23 . Meanwhile, as the worldwide population continues to grow, LPG production increases every year. It is estimated that annual worldwide LPG production will reach 350 million metric tons by 2030 and climb to 400 million metric tons by 2050 24 . Against this background, the carbon-neutral production of light olefins and LPG has great significance and a bright future.
CO 2 hydrogenation, which is a crucial catalytic CO 2 conversion reaction, can occur through the methanol intermediate or Fischer-Tropsch synthesis (FTS) routes. To our knowledge, almost all the LPG synthesis methods by C1 chemistry until now, regardless of whether they used syngas (CO/H 2 ) or CO 2 /H 2 , employed a methanol-intermediated route by combining methanol synthesis catalysts with zeolites 17 – 20 . Almost no Fischer-Tropsch route was reported for LPG synthesis, especially from CO 2 /H 2 . Although methanol as an intermediate pathway can break the Anderson-Schulz-Flory (ASF) distribution and obtain a target product with high selectivity, it often suffers from low CO 2 conversion (10 – 35%) and high CO selectivity (20 – 75%) due to the thermodynamic equilibrium limitation and thus does not meet the needs of industrial production 2 . On the other hand, CO 2 hydrogenation to light olefins is still a hot area, but it faces challenges in increasing CO 2 conversion, suppressing CO by-product selectivity, and enhancing light olefin selectivity. Therefore, a modified Fischer-Tropsch route for light olefin and LPG synthesis that simultaneously maintains a high reaction rate and breaks the ASF distribution is urgently needed.
Iron-based catalysts are the most commonly used catalysts for the FTS due to their high reaction activities in both reverse water gas shift (RWGS) and chain growth reactions 25 – 27 . However, an unmodified iron-based catalyst typically exhibits poor activity and high by-product (CO, CH 4 , C 2 H 6 , etc.) selectivity 28 . To overcome this issue, alkali metal ions, such as K and Na, were added to boost the CO 2 adsorption and the contents of active phases 29 , 30 . Indeed, these modified iron-based catalysts without the use of zeolites presented comparable catalytic performances to those of the zeolite-containing composite catalysts 25 , 31 .
Besides, bimetallic catalysts that combine Fe with other active metal components (Co, Cu, Ni, etc.) have also been investigated. Among them, the incorporation of Co into Fe-based catalysts has been proven to enhance the reactivity and target product selectivity 26 , 28 , 29 , 32 , 33 . Deo et al. discovered that the addition of controlled amounts of Co to Fe resulted in high yields of methane 34 . Furthermore, Xu et al. proposed that the generation of active iron-cobalt carbides originating from a ternary ZnCo x Fe 2-x O 4 catalyst was conducive to the formation of light olefins 35 . Recently, our group reported a spinel-like ZnFe 2 O 4 with a small amount of cobalt incorporation for CO 2 conversion and found that the presence of Co 3 Fe 7 sites could facilitate a high-yield production of liquid fuels (26.7% for C 5 + ) 36 . Similarly, Zhang et al. detected that the Na-promoted CoFe alloy benefited the formation of jet fuel 37 . These reports manifest that the combination of Fe and Co can be used as a powerful and efficient catalyst for the selective conversion of CO 2 , and that the intimate interaction between cobalt and iron species is able to tune the product distribution. To our knowledge, supported iron-based catalysts could merely generate one type of hydrocarbon during direct CO 2 hydrogenation. However, given the different intrinsic properties of Fe and Co in the formation of hydrocarbon products, where iron contributes to alkene production and cobalt contributes to saturated alkane production 38 , 39 , the rational regulation of Fe and Co active site distribution may play a role in transforming the product types. Furthermore, the introduction of a support has been revealed to significantly influence the local environment of the active sites; even a three-dimensional encapsulation structure of graphene led to a fascinating result 40 , 41 .
Based on the above assumptions, the particles of Fe and Co with rational spatial distributions regulated by the graphene support may achieve integrated production of different types of hydrocarbons.
Herein, we report a graphene-fence engineering approach to regulating multiple active sites of Fe-Co bimetallic catalysts for product-switchable CO 2 hydrogenation. Taking advantage of the structural transformation of graphene during the reduction process, a series of graphene-supported Fe-Co bimetallic catalysts with different internal and surface distributions of active sites were successfully synthesized. The Fe-Co active sites tuned the demand for carbon chain growth and olefin secondary hydrogenation, leading to an integrated and switchable process for selective CO 2 hydrogenation to light olefins or LPG. Iron carbides combined with metallic cobalt on the surface of graphene fences could catalyze CO 2 hydrogenation to light olefins (50.1% for C 2 = –C 4 = ) at a conversion of 55.4%, whereas the scattered spatial active sites of iron carbides and metallic cobalt, separated by graphene fences, achieved an LPG (C 3 P –C 4 P ) selectivity of 43.6% at a conversion of 46%. Meanwhile, this work set a precedent for CO 2 hydrogenation to LPG via a Fischer-Tropsch pathway and exhibited an ultra-high STY (space-time yield) of LPG (151.0 g kg cat –1 h –1 ), much higher than that of any previously reported composite methanol-intermediate catalyst (Supplementary Fig. 1 ). In addition, the graphene fences could also protect the metal particles from deactivation by agglomeration, thus maintaining high activity over a long continuous test. Our research offers methodologies for manipulating graphene as fences to divide active nanoparticles and switch product types and sheds light on the rational design of multiple active sites for the synthesis of target chemicals (Supplementary Fig. 2 ).
Catalyst preparation
Graphene oxide (GO)
K 2 S 2 O 8 (7.5 g, Rgent Chemical Reagent Co.) and P 2 O 5 (7.5 g, Damao Chemical Reagent Co.) were added to a round-bottomed flask under the conditions of an 80 °C water bath and then combined with concentrated H 2 SO 4 (Fuchen Chemical Reagent Co.) by stirring for 15 min. Graphite powder (10 g, Sinopharm Chemical Reagent Co.) was subsequently added to the solution. After stirring for 4.5 h, the mixture was filtered and rinsed until the pH of the supernatant reached 7, before being dried overnight. The dried pre-oxidized graphite was then transferred into a three-necked flask with concentrated H 2 SO 4 in an ice-water bath. Under the aforementioned conditions, KMnO 4 (50 g, Sinopharm Chemical Reagent Co.) was added in these preparation phases. The solution was stirred at 35°C for 3 h before being progressively combined with deionized water and 30% H 2 O 2 until no bubbles occurred, and then aged overnight. The bottom slurry of the solution was transferred to the 3% HCl (Fuchen Chemical Reagent Co.) for acidification treatment. After filtering and washing to neutrality, GO was then moved to deionized water and ultrasonically agitated for 5 h. Finally, dried graphene oxide was obtained using a freeze-drying method.
Graphene-supported Fe-Co bimetallic catalysts
Graphene-supported K-Fe-Co bimetallic catalysts were synthesized using one-pot hydrothermal synthesis and impregnation. The target loadings for the prepared catalysts were 20% Fe, 4% Co, and 1% K, respectively. The ingredients of the obtained catalysts were investigated by inductively coupled plasma optical emission spectrometer (ICP-OES) tests, and the results are shown in Supplementary Table 1 .
In detail, GO (2.0 g), urea (2.0 g, Sinopharm Chemical Reagent Co.), and Fe(NO 3 ) 3 ·9H 2 O (2.4 g, Damao Chemical Reagent Co.) were dissolved in a mixture of ethylene glycol (40 mL, Hengxing Chemical Preparation Co.) and deionized water (290 mL) and then stirred and ultrasonicated for 2 h. The obtained liquid was transferred into a Teflon-lined stainless-steel autoclave, followed by one-pot hydrothermal synthesis at 180 °C for 12 h with rotation. The products were washed and filtered until neutral, then freeze-dried before calcining at 500 °C in a nitrogen atmosphere for 4 h (unless otherwise stated, freeze-drying was employed for all drying steps to maintain the graphene structure). Cobalt and potassium were loaded by impregnation, using C 10 H 16 CoO 4 (Macklin Biochemical Co.) and K 2 CO 3 (Damao Chemical Reagent Co.) as the cobalt and potassium sources, respectively. The amount of the impregnated material was calculated according to the given contents. The obtained catalyst was dried and calcined at 500 °C in a nitrogen atmosphere for 4 h. The as-prepared catalyst was designated as GO-Fe/K-Co. It should be noted that the portion before the slash represents the elements loaded by the one-pot hydrothermal method, while the portion after the slash represents the elements loaded by the impregnation method.
GO/K-Fe-Co was prepared in the following steps: Hydrothermal treatment of GO and urea was first performed, followed by impregnating K 2 CO 3 , Fe(NO 3 ) 3 ·9H 2 O, and C 10 H 16 CoO 4 as K, Fe, and Co sources onto the calcined catalyst. The drying and calcination processes after the hydrothermal synthesis and the impregnation remained unchanged.
In the same way, GO-Co/K-Fe and GO-Fe-Co/K catalysts were synthesized with the given loadings by changing the orders of material addition via the hydrothermal or impregnation methods without altering any other preparation techniques or stages.
To examine the influence of Fe contents, GO-Fe/K-Co with target Fe loadings of 25% and 30% were prepared and denoted as GO-25Fe/K-Co and GO-30Fe/K-Co, respectively. In addition, GO/K-10Fe-20Co was synthesized by loading 10% Fe and 20% Co with an impregnation process. Unlike the GO/K-Fe-Co catalyst mentioned above, Fe here was first impregnated onto the graphene surface, and then Co was loaded.
Graphene-supported Fe catalysts
GO-Fe/K and GO/K-Fe catalysts without the Co addition were also fabricated, in which the loadings for Fe and K were 20 wt% and 1 wt%, respectively.
Other carbon materials supported Fe-Co bimetallic catalysts
By adding the same amounts of K 2 CO 3 , Fe(NO 3 ) 3 ·9H 2 O, and C 15 H 21 CoO 6 to those of the GO-Fe/K-Co catalyst for comparison, carbon nanotubes (CNTs, Macklin Biochemical Co.), carbon black (CB, Macklin Biochemical Co.), and activated carbon (AC, Damao Chemical Co.) were utilized as supports for the preparation of Fe-Co bimetallic catalysts. These catalysts were designated as CNTs-Fe/K-Co, CB-Fe/K-Co, and AC-Fe/K-Co, respectively.
rGO
Only GO (2.0 g) and urea (2.0 g) were dissolved in the mixture of ethylene glycol (40 mL) and deionized water (290 mL), agitated, and transferred into a Teflon-lined stainless-steel autoclave, where a one-pot hydrothermal synthesis was carried out for 12 h at 180 °C. The resulting material was washed, dried, and calcined; the resultant graphene was labeled rGO.
rGO-Fe/K-Co
The same synthetic process as for GO-Fe/K-Co was applied to the synthesis of rGO-Fe/K-Co, but the raw ingredient was rGO.
Characterization
The actual total loadings of Fe, Co, and K in the different catalysts were established using ICP-OES, performed on an Agilent 5110 (OES). The test procedure was as follows: a 10 mg sample was dissolved overnight in a mixed acid solution (2 mL HNO 3 + 6 mL HCl + 2 mL HF). The dissolved sample was then added to a volumetric flask and diluted to the scale line. Standard solutions at five concentrations (0.5 mg/L, 1 mg/L, 3 mg/L, 5 mg/L, and 10 mg/L) were analyzed, and a calibration curve was constructed.
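The five-point calibration described above amounts to an ordinary least-squares line through the standards. Only the five standard concentrations come from the text; the instrument intensities in the check are fabricated for illustration.

```python
import numpy as np

def calibration_curve(conc_std, intensity_std):
    """Fit intensity = slope * concentration + intercept to the standards."""
    slope, intercept = np.polyfit(conc_std, intensity_std, 1)
    return slope, intercept

def concentration_from_intensity(intensity, slope, intercept):
    """Invert the calibration line to read off an unknown's concentration."""
    return (np.asarray(intensity, dtype=float) - intercept) / slope
```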
To acquire diffraction patterns, an X-ray diffractometer (XRD) with Cu Kα radiation was employed over a Rigaku RINT 2400 instrument (Scan angle: 5°–90°; scan speed: 2°/min; voltage and current: 40 kV and 40 mA). In situ XRD measurement was conducted on a SmartLab-TD diffraction system using a Cu Kα source with an XRK 900 heater. The reduction was carried out under the conditions of pure hydrogen and a temperature range of 25 – 400 °C. The carbonization process was performed on the reaction gas (CO 2 /H 2 ) at 320 °C. Surface morphologies of the catalysts were examined using scanning electron microscopy (SEM, JEOL JSM-IT700HR), and transmission electron microscopy (TEM, JEOL JEM-2100F) was utilized to observe the morphologies and elemental mapping of the catalysts at an acceleration voltage of 100 kV. FIB (Focused ion beam)-SEM was performed using a double-beam electron microscope (Helios G4 PFIB CXe). The surface area of the catalysts was determined using N 2 adsorption-desorption experiments at –196 °C (Micromeritics 3Flex ASAP 2460). Prior to the tests, the samples were vacuum-degassed for 8 h at 240 °C. The M-H properties of the fresh catalysts were measured with a vibrating sample magnetometer (VSM, LakeShore 7404).
H 2 temperature-programmed reduction (H 2 -TPR) tests were performed using a BELCAT-II-T-SP analyzer with a thermal conductivity detector (TCD). Helium was used as a pretreatment gas for the sample of 30 mg for 1 h at 300 °C. A gas mixture (5% H 2 /Ar) was then delivered to the reactor at a rate of 30 mL/min when the temperature was decreased to 50 °C. Finally, H 2 -TPR curves were obtained at temperatures ranging from 50 to 900 °C with a heating rate of 10 °C per minute. CO 2 or H 2 temperature-programmed desorption (TPD) tests were also investigated using the same apparatus. 30 mg of the sample was reduced for 2 h at 400 °C under a 100% H 2 gas flow (30 mL/min). The temperature of the reactor was reduced to 50 °C under a He gas flow (30 mL/min) after reduction. The reactor was subsequently filled with a 100% CO 2 or 5% H 2 /Ar gas mixture for 1 h. He gas was then introduced into the reactor to remove the physically-adsorbed CO 2 or H 2 . The CO 2 -TPD and H 2 -TPD curves were recorded from 50 to 900 °C with a heating rate of 10 °C per minute. C 3 H 6 -pulse transient hydrogenation experiments were performed on the spent catalysts. Before experiments, the spent catalysts were pretreated in pure H 2 at 350 °C for 2 h to activate the surface. And then the system was cooled to 320 °C in the Ar stream. After that, the samples were exposed to pure H 2 . As the 10% C 3 H 6 /90% Ar gas was pulsed into the reactor, CH 4 ( m / z = 16), C 3 H 6 ( m / z = 42), and C 3 H 8 ( m / z = 44) transient signals were detected by a mass spectrometer.
For the X-ray photoelectron spectroscopy (XPS) analyses, a KRATOS Axis Ultra DLD spectrometer was utilized, equipped with a catalyst pretreatment chamber for altering the gas composition. The excitation source was Al Kα radiation (hν = 1486.6 eV).
The 57 Fe Mössbauer spectra were recorded on an SEE Co W304 Mössbauer spectrometer, using a 57 Co/Rh source in transmission geometry. The data were fitted using the MossWinn 4.0 software. Fourier transform infrared spectroscopy (FTIR) was conducted on a Thermo Scientific Nicolet iS20 IR spectrometer. The samples were finely milled, evenly combined with KBr, and pelletized. The spectral resolution was 4 cm – 1 , and 32 scans were recorded for each spectrum. The Raman spectra were recorded at room temperature on a HORIBA Scientific LabRAM HR Evolution Raman spectrometer.
Co K-edge analyses were carried out with Si (111) crystal monochromators at the BL11B beamline of the Shanghai Synchrotron Radiation Facility (SSRF) (Shanghai, China). Before examination at the beamline, samples were compressed into thin sheets 1 cm in diameter and sealed with Kapton tape film. The spectra were captured using a 4-channel Silicon Drift Detector (SDD, Bruker 5040) at room temperature. The Co K-edge extended X-ray absorption fine structure (EXAFS) spectra were recorded in transmission mode. Two scans were conducted for each sample, and negligible changes in the line shape and peak position of the Co K-edge XANES spectra were observed between the two scans. The EXAFS spectra of the standard samples (Co, CoO, and Co 3 O 4 ) were also recorded in transmission mode. The spectra were processed and analyzed using the software codes Athena and Artemis.
Catalyst tests
Granular catalysts (0.12 g, 20–40 mesh) mixed with 0.5 g of quartz sand were used to evaluate the catalytic performance in a fixed-bed reactor. On top of the catalyst bed, 1 g of glass beads was placed to adjust the bed height and preheat the reaction gas. Quartz wool separated the catalyst bed from the glass beads at both ends. Prior to the reaction, the catalyst was reduced for 8 h at 400 °C under pure H 2 at 30 mL/min. After reduction, the reactor was cooled to room temperature. The reactor was then fed with CO 2 /H 2 /Ar (27.0/68.0/5.0) reactant gas, and the temperature and pressure of the system were gradually raised to 320 °C and 3.0 MPa, respectively, with W cat /F CO2+H2 = 4.5 g h mol –1 .
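As a quick consistency check (our own back-calculation; the paper does not state the feed flow rate), the stated W cat /F CO2+H2 of 4.5 g h mol –1 and the 27/68/5 feed composition imply a total feed flow of roughly 10.5 mL/min (STP basis) for 0.12 g of catalyst:

```python
# Back-calculate the total feed flow implied by W/F (illustrative, ideal-gas STP basis).
w_cat = 0.12                          # g of catalyst
wf = 4.5                              # g h mol^-1, defined on the CO2+H2 feed
x_co2, x_h2, x_ar = 0.27, 0.68, 0.05  # feed mole fractions from the text
f_co2_h2 = w_cat / wf                 # mol/h of CO2+H2
f_total = f_co2_h2 / (x_co2 + x_h2)   # mol/h including the Ar diluent
flow_ml_min = f_total * 22414 / 60    # mL/min at STP (22414 mL/mol)
print(round(flow_ml_min, 1))          # ~10.5
```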
To collect the heavy hydrocarbons and remove the water generated by the reaction, an ice trap was placed between the reactor and the back-pressure valve, and 2 g of octane was added to the ice trap to absorb heavy hydrocarbons. At the end of the reaction, the product in the ice trap was collected, and 0.1 g of dodecane and 0.1 g of 2-butanol were added as internal standards to the oil and water phases, respectively. An off-line gas chromatograph (Shimadzu GC-2014) equipped with a flame ionization detector (FID) and a DB-1 capillary column was used to analyze the heavy hydrocarbons and the water-phase products. Two online gas chromatographs (GL Sciences GC320 and Shimadzu GC-2014) were used to identify the gas-phase products: one had a thermal conductivity detector (TCD, GC320) and an active charcoal column for analyzing Ar, CO, CH 4 , and CO 2 , while the other had an FID (GC-2014) and a GS-ALUMINA capillary column for analyzing light hydrocarbons.
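The internal-standard quantification mentioned above follows the classic FID bookkeeping; a minimal sketch (with invented peak areas and a hypothetical response factor of 1) might look like:

```python
# Internal-standard quantification sketch. Dodecane (0.1 g) is the internal
# standard for the oil phase, as in the text; the areas and RF below are invented.
def mass_from_internal_standard(area_analyte, area_is, m_is, rf=1.0):
    """m_analyte = RF * (A_analyte / A_IS) * m_IS."""
    return rf * (area_analyte / area_is) * m_is

m_product = mass_from_internal_standard(area_analyte=2.4e6, area_is=1.2e6, m_is=0.1)
print(m_product)  # 0.2 g under these assumed areas and RF = 1
```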
Statistics and reproducibility
We repeated the catalytic tests for the main catalysts, and all experimental results could be reproduced within a small margin of error. No statistical method was used to predetermine the sample size. No data were excluded from the analyses. The experiments were not randomized. The investigators were not blinded to allocation during experiments and outcome assessment.

Results
Adjustable spatial distribution of multiple sites
A series of graphene-supported Fe-Co bimetallic catalysts with identical total contents of Fe, Co, and K were synthesized by varying the addition order of Fe and Co during the hydrothermal and impregnation processes (Supplementary Fig. 3 ). In the as-prepared catalysts, only the characteristic diffraction peaks ascribed to rGO and Fe 2 O 3 were observed by XRD (Fig. 1a ); no peaks associated with Co were detected in the four graphene-supported catalysts. To determine the total contents of the different metal elements, we carried out inductively coupled plasma optical emission spectrometry (ICP-OES) measurements and found that the contents of Fe, Co, and K were close to the theoretical values of 20 wt%, 4 wt%, and 1 wt%, respectively (Supplementary Table 1 ). Although the total contents of each metal were roughly the same for the different catalysts, the unique structure separated by graphene fences formed in the hydrothermal process led to different spatial distributions of Fe and Co sites in the inner and surface layers.
Throughout the hydrothermal process, the decrease in graphene layer distances and the cross-linking of the graphene layers led to the dynamic transformation of GO from the 2D lamellar structure to the 3D stereoscopic structure 40 , 41 . To demonstrate this dynamic evolution, we employed in situ XRD to detect the diffraction peak shift of GO during the temperature-programmed reduction process. With increasing temperature and H 2 introduction, the diffraction peaks gradually shifted to higher angles (Supplementary Fig. 4 ), representing a decrease in the graphene layer spacing as determined by Bragg's law 42 . Similarly, the rGO obtained by hydrothermal treatment also exhibited a higher-angle peak and a smaller layer spacing compared with GO (Fig. 1b ). Meanwhile, the specific surface area significantly decreased (Supplementary Table 2 ). These phenomena corresponded to the folding and bending of the graphene layers, as observed in the SEM images (Fig. 1c ).
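The peak-shift argument is just Bragg's law, d = nλ/(2 sin θ); a short numerical illustration (using typical literature peak positions for GO and rGO, not the measured values from Supplementary Fig. 4) shows how a higher 2θ maps to a smaller interlayer spacing:

```python
from math import sin, radians

# Bragg's law: d = n * lambda / (2 * sin(theta)); Cu K-alpha, lambda = 1.5406 angstrom.
# The 2-theta positions below are typical literature values, not the measured ones.
def d_spacing(two_theta_deg, wavelength=1.5406, n=1):
    return n * wavelength / (2 * sin(radians(two_theta_deg / 2)))

d_go = d_spacing(10.9)    # GO (001): ~8.1 angstrom interlayer spacing
d_rgo = d_spacing(24.5)   # rGO (002): ~3.6 angstrom after loss of oxygen groups
print(round(d_go, 2), round(d_rgo, 2))
assert d_rgo < d_go       # higher-angle peak -> compressed layers
```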
Accordingly, due to the cross-linking effect of the graphene layers during the hydrothermal process, the metals added together with GO were partially encapsulated in the folded inner layers and in situ replaced the oxygen-containing groups of the graphene layers. Consequently, the “graphene fences” were formed by the reduced graphene layers, which encased metal nanoparticles. After the hydrothermal treatment, the metals introduced by impregnation were more easily loaded onto the surface of the folding graphene layers instead of the internal layers, owing to the separation effects of the graphene fences. These unique structures were reflected in the molecular vibration spectra, surface and internal element contents, and morphological characterizations of the catalysts.
We employed the FTIR spectra to identify the loss of oxygen-containing groups replaced by metal sites (Fig. 1d ). The stretching vibrations of O − H (3400 cm –1 ), C = O (1722 cm –1 ), aromatic C = C (1620 cm –1 ), carboxyl O = C − O (1356 cm –1 ), epoxy C − O (1220 cm –1 ), and alkoxy C − O (1050 cm −1 ), which served as references, were all observed in the FTIR spectrum of GO. After hydrothermal synthesis, the oxygen-containing group peaks of rGO were significantly diminished, and two broad peaks at 1550 cm –1 and 1175 cm –1 appeared, which were assigned to C = C and C–O, respectively 43 . Intriguingly, the vibration peaks of the metal-supported catalysts displayed smaller areas than those of rGO, especially for GO-Fe/K-Co and GO-Fe-Co/K, which could be interpreted as more substitution of oxygen-containing groups by metals in the reduction process, caused by the metals loaded on the inner graphene layers. However, the metals in GO/K-Fe-Co and GO-Co/K-Fe were unable to adequately replace the oxygen-containing groups inside the layers due to the protection of the graphene fences, thus resulting in larger peak areas.
Furthermore, the thicknesses of the graphene-supported Fe-Co bimetallic catalysts could also reveal the various structures. The thicknesses of the graphene were detected by the second-order peak positions of the Raman spectra, which appear near 2700 cm –1 . In general, a higher peak position represents a greater graphene thickness and more graphene layers 44 , 45 . GO and rGO displayed the maximum and minimum graphene thicknesses, respectively (Fig. 1e ). With the incorporation of Fe and Co, the thicknesses of graphene increased compared with rGO, among which GO-Fe-Co/K showed the largest thickness, revealing that the presence of both Fe and Co on the inner layers hindered the compression of the graphene layers during the hydrothermal process. As expected, GO/K-Fe-Co showed the smallest thickness of all the graphene-supported catalysts. The I D / I G values displayed the reverse order to the thicknesses, indicating that the disorder increased when the graphene thicknesses were compressed (Fig. 1e ).
The varying Fe-Co distributions were further demonstrated by the different metal contents between the surface and the interior. Based on the XPS results, the Fe surface contents in the GO-Co/K-Fe (12.3%) and GO/K-Fe-Co (11.7%) catalysts, whose Fe was introduced by impregnation, were distinctly higher than those of the catalysts whose Fe was loaded by hydrothermal incorporation, such as GO-Fe-Co/K (3.4%) and GO-Fe/K-Co (4.1%) (Supplementary Table 3 ). Meanwhile, GO/K-Fe, with a higher surface Fe content (Supplementary Table 3 ), exhibited a stronger magnetism intensity in the M-H loop than GO-Fe/K (Supplementary Fig. 5 ), which further supported the conclusion that Fe added by the hydrothermal process was partially encased in the graphene layers, resulting in a lower surface content and a weaker magnetism intensity. Furthermore, the Fe/Co values depicted by SEM mapping, which also reflected the surface element contents, exhibited the same trend as those measured by XPS (Supplementary Fig. 6 and Supplementary Table 4 ). In addition to the surface content characterizations, we applied FIB-SEM to directly investigate the metal distributions in the graphene inner layers. In the cross-sectional SEM images of GO-Fe/K-Co, the metal nanoparticles were found to be loaded between the graphene layers (Supplementary Fig. 7a ), which corresponded to the oxide and iron mapping distributions (Supplementary Fig. 7b , c ), while only a small amount of cobalt was observed on the internal metal particles (Supplementary Fig. 7d ). Meanwhile, the Fe/Co value of the cross-section (8.42) obtained by SEM mapping (Supplementary Table 5 ) was significantly higher than that of the catalyst surface (0.82) (Supplementary Table 4 ). The various metal distributions on the surface and in the interior effectively corroborated the reconstruction of the graphene-supported catalysts, forming a unique structure with different spatial distributions of multiple active sites.
Additionally, as shown in Fig. 1f , the yellow circles displayed overlapping Fe and Co distributions in GO-Fe/K-Co, while in the red circles only the Fe distribution was clearly observed. This could be explained by the difficulty of impregnating Co into the folded interior graphene layers, in contrast to the Fe loaded by hydrothermal synthesis, thus forming active sites with different metal compositions. As a result, the yellow and red circles represented the surface and internal metal sites of the graphene layers, respectively. By contrast, Fe and Co in GO/K-Fe-Co exhibited a uniform and well-dispersed distribution over the outer surface of the graphene layers (Fig. 1f ). These findings convincingly verified that the spatial distributions of the Fe-Co active sites could be regulated by the graphene fences, forming scattered sites (Fe+CoFe) or uniform assemblage sites (FeCo) (Fig. 1f ).
In line with the descriptions above, a detailed schematic diagram of the dynamic evolution during the synthesis of GO/K-Fe-Co and GO-Fe/K-Co is drawn in Fig. 1g . For the GO/K-Fe-Co catalyst, GO was first reduced by the hydrothermal process, causing the graphene layers to be twisted and folded and thus forming graphene fences. Subsequently, Fe and Co were incorporated through impregnation and dispersed primarily on the surface of the graphene fences with good dispersion, on account of the resistance of the folded interior layers. In the case of the GO-Fe/K-Co catalyst, Fe was introduced during the hydrothermal process, which in situ replaced the oxygen-containing groups and became partially encapsulated in the interior graphene layers. The impregnated Co was then loaded on the exterior graphene layers, because access to the interior was blocked by the graphene fences, forming scattered sites with a small amount of Fe on the outer surface of the graphene layers. The two different spatial distribution structures described above could be directly observed in the SEM images. Figure 1h clearly showed homogeneous Fe-Co active sites arranged uniformly on the surface of the graphene layers (GO/K-Fe-Co) and metal sites separated by graphene fences (GO-Fe/K-Co), in which the yellow circle represented the sites on the surface layer and the red circles represented the sites on the internal layer. Therefore, with the assistance of the developed graphene fences, graphene-supported Fe-Co bimetallic catalysts with adjustable metal spatial distributions in the internal and external layers were successfully synthesized.
Phase composition characterizations of Fe-Co catalysts
Several characterizations were first applied to determine the surface metal phase compositions of the as-prepared catalysts. The HR-TEM images (Supplementary Fig. 8 ) revealed lattice spacings of 0.25 nm that corresponded to the (119) plane of the Fe 2 O 3 species in the four graphene-supported catalysts, consistent with the XRD results shown in Fig. 1a 46 . The XPS Fe 2 p spectra (Supplementary Fig. 9 ) showed binding energy peaks at 710.8 eV and 712.8 eV, which were ascribed to the Fe(II) and Fe(III) phases, respectively 47 . Moreover, according to the 57 Fe Mössbauer spectra (Supplementary Fig. 10 and Supplementary Table 6 ), Fe was predominantly present in the form of Fe 3 O 4 , a combination of Fe 2+ and Fe 3+ species. Due to the low contents and high dispersions, no diffraction peaks attributed to Co appeared in the XRD patterns, whereas two peaks assigned to Co 2+ /Co 3+ and Co 0 appeared in the Co 2 p XPS spectra (Supplementary Fig. 11 ) 48 . To further ascertain the Co proportion on the catalytic surface, XANES spectra with fitting curves were recorded. As pictured in Supplementary Fig. 12 , CoO was identified as the predominant phase of the Co species regardless of their spatial distributions. Besides, the Co K-edge EXAFS curve fitting results and parameters, displayed in Supplementary Fig. 13 and Supplementary Table 7 , disclosed the existence of Co–Co and Co–O bonds.
Regarding the spent catalysts, the Fe 2 p spectra showed binding energy peaks at 708.5 eV, which were assigned to Fe-C bonds (Fig. 2b ), indicating the presence of iron carbides 49 , 50 . Based on the XRD patterns, diffraction peaks attributed to Fe 5 C 2 were observed in all spent catalysts (Fig. 2a ). Among them, the diffraction peaks with the strongest intensities corresponded to Fe 5 C 2 (510) facets, which were also revealed by HR-TEM images with lattice spacings of 0.20 nm (Supplementary Fig. 14 ) 51 . Furthermore, the coexistence of Fe 5 C 2 (A) and Fe 5 C 2 (B) species was determined from two overlapping sextets in the 57 Fe Mössbauer spectra (Fig. 2c and Supplementary Table 8 ), which represented the different occupied sites of Fe in the crystallographic structure of Fe 5 C 2 52 . These findings demonstrated that iron carbide, an active phase for the chain growth reaction, was mainly present as the Fe 5 C 2 (510) phase in the spent catalysts 21 . Co K-edge XANES tests were also applied to identify the Co phases in the spent catalysts. In contrast to the as-prepared catalysts, the majority of the cobalt existed in metallic states (Fig. 2d ), demonstrating the good reducibility of Co in the graphene-supported catalysts. Moreover, the EXAFS fitting results illustrated the presence of only Co–Co bonds, providing further evidence that the cobalt existed in metallic phases (Supplementary Fig. 15 and Supplementary Table 9 ).
Catalytic performances and stabilities
The catalysts were tested at 320 °C and 3.0 MPa with W/F = 4.5 g h mol –1 . The main products of both the GO-Fe/K and GO/K-Fe catalysts, without the addition of Co, were methane and light olefins instead of saturated light paraffins (Fig. 3a ). However, as shown in Fig. 3b , with the Co incorporation, the Fe-Co bimetallic catalysts displayed various types of hydrocarbon product selectivities. GO-Co/K-Fe and GO/K-Fe-Co mainly produced light olefins; in particular, for the GO/K-Fe-Co catalyst, 50.1% C 2 = –C 4 = selectivity was obtained at a CO 2 conversion of 55.4%. For GO-Fe-Co/K, by contrast, more alkanes, particularly methane and ethane, were produced (Fig. 3b ). However, GO-Fe/K-Co exhibited a higher selectivity of propane and butane (43.6%) than GO-Fe-Co/K (Fig. 3b ). In comparison with GO-Fe/K, the CoFe sites of GO-Fe/K-Co were formed by impregnating Co onto the surface Fe sites of GO-Fe/K with the assistance of the graphene fences. GO/K-10Fe-20Co was also employed to simulate the external CoFe sites of GO-Fe/K-Co, which had similar surface Fe/Co ratios (0.6 and 0.7) (Supplementary Table 3 ). The high methane selectivity confirmed that the external CoFe sites with a high Co content had a strong hydrogenation capacity (Fig. 3a ), which facilitated the hydrogenation of the light olefins originally produced on the internal Fe sites into saturated alkanes, thus transforming the products from light olefins to LPG (Fig. 3g ). In this process, the reaction equilibrium was shifted in the positive direction, leading to an increase in the CO 2 conversion from 33.2% (GO-Fe/K) to 46.0% (GO-Fe/K-Co). This can also be observed in the detailed product distributions of GO-Fe/K and GO-Fe/K-Co (Supplementary Fig. 16 ). Before the Co addition, the product selectivities of C2 and C3 in GO-Fe/K were roughly the same.
However, after the introduction of Co, C3 products showed the highest selectivity, which further indicated that the diffusion and hydrogenation effects on the internal-active-site products shifted the chemical equilibrium in the positive direction.
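The conversion and selectivity figures quoted here follow the standard carbon-based bookkeeping for CO 2 hydrogenation; a hedged sketch (the paper's exact formulas are not reproduced in this section, and the mole numbers below are invented for illustration) might look like:

```python
# Standard carbon-based metrics for CO2 hydrogenation (illustrative numbers only).
def co2_conversion(n_co2_in, n_co2_out):
    """Conversion (%) from inlet/outlet CO2 molar flows."""
    return (n_co2_in - n_co2_out) / n_co2_in * 100

def hydrocarbon_selectivity(products):
    """Carbon-based selectivity (%) from {name: (moles, carbon_number)}."""
    total_c = sum(n * nc for n, nc in products.values())
    return {name: n * nc / total_c * 100 for name, (n, nc) in products.items()}

conv = co2_conversion(100.0, 54.6)
sel = hydrocarbon_selectivity({"CH4": (10, 1), "C3H8": (20, 3), "C3H6": (5, 3)})
print(round(conv, 1))         # 45.4
print(round(sel["C3H8"], 1))  # 70.6
```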
In contrast to GO-Fe/K-Co, the products of rGO-Fe/K-Co were mainly light olefins rather than alkanes (Fig. 3a ). This was because of the absence of the spatial dual active sites separated by the graphene fences (Supplementary Fig. 17 ), as proven by Supplementary Fig. 18 (identical distributions of Fe and Co in the TEM elemental mapping images), which resulted in a higher Fe/Co ratio (Supplementary Table 3 ) and thus a weaker hydrogenation ability. Such a result further proved that the unique spatial distributions of the Fe-Co dual active sites tuned by the graphene fences could efficiently control the product types.
By comparing the TEM images (Fig. 1f and Supplementary Figs. 19 – 21 ), we observed that the fresh GO-Fe-Co/K and GO/K-Fe-Co catalysts presented smaller particle sizes than the other catalysts, indicating that the simultaneous addition of Fe and Co constrained the aggregation of Fe nanoparticles and enhanced the dispersion of the Fe species. CO 2 hydrogenation is a structurally sensitive reaction, and higher dispersions are beneficial for improving the carbonization ability of Fe and thus enhancing the catalytic performance (Fig. 3g ) 53 . Accordingly, the 57 Fe Mössbauer spectra of the spent catalysts (Fig. 2c and Supplementary Table 8 ) showed that GO/K-Fe-Co exhibited the largest proportion of Fe 5 C 2 (72%), including both Fe 5 C 2 (A) and Fe 5 C 2 (B). Correspondingly, GO/K-Fe-Co also manifested the highest CO 2 conversion (55.4%) and a rather high light olefin selectivity (50.1%) (Fig. 3b ). The GO-Fe-Co/K catalyst, however, displayed the highest peak temperatures in the H 2 -TPR (peak γ) and CO 2 -TPD profiles (Supplementary Figs. 22 and 23 ). This phenomenon could be explained by the fact that smaller particles entered or penetrated the interlayers more easily during the hydrothermal process, as evidenced by the smaller surface metal loadings shown in Supplementary Table 3 . As a result, the metal nanoparticles tightly covered by graphene layers showed strong metal-support interaction (SMSI), which made the reduction and carbonization more challenging 53 . This view was intuitively illustrated by in situ XRD: only when the temperature rose to 400 °C did a peak of metallic iron appear for the GO-Fe-Co/K catalyst, the highest such temperature among all the catalysts. Correspondingly, in the carbonization process, GO-Fe-Co/K also showed the lowest peak intensity of Fe 5 C 2 (Supplementary Fig. 24 ).
Meanwhile, previous research has demonstrated that strongly chemisorbed CO 2 is not easily activated for hydrogenation, and the coating of the surface Fe species formed by the strong chemical adsorption of CO 2 may lower the catalytic performance 38 . Consequently, these factors led to a slightly lower Fe 5 C 2 content (44.6%) and CO 2 conversion (42.1%) for GO-Fe-Co/K than for GO/K-Fe-Co (72% and 55.4%) (Fig. 3b and Supplementary Table 8 ).
In contrast to GO-Fe-Co/K, the GO-Fe/K-Co catalyst displayed appropriate metal-support interaction (MSI) and CO 2 adsorption strengths (Supplementary Figs. 22 and 23 ), leading to a CO 2 conversion of up to 46% (Fig. 3b ). Given that cobalt was inactive in the RWGS reaction 39 , 54 , the Co surrounding the Fe 5 C 2 on the surface layers would further consume CO without producing it 55 . Accordingly, GO/K-10Fe-20Co, which was used to simulate the external Fe/Co sites, also exhibited a low CO selectivity (2.7%), significantly lower than that of GO-Fe/K (15.6%), which contained only Fe active sites (Fig. 3a ). This finding further suggested that the CoFe active sites with a low Fe/Co ratio (0.82) (Supplementary Table 4 ) located on the external surface of GO-Fe/K-Co could consume the CO produced by the internal Fe sites, thus keeping the CO selectivity at a low level. As a result, GO-Fe/K-Co showed an ultra-low CO selectivity (2.2%) among all the catalysts (Fig. 3b ). To the best of our knowledge, this is the lowest value reported for either the methanol-intermediate route or the modified FTS pathway (for Fe-containing catalysts). Furthermore, thick carbon deposition around metal particles would reduce the catalytic activity 56 , 57 , and this phenomenon can be observed in the TEM images of the spent catalysts (Supplementary Fig. 14 ). Clearly, as determined by TEM mapping, the Fe/Co ratio in the spent GO-Co/K-Fe was much lower than that in the other catalysts and in the fresh GO-Co/K-Fe (Supplementary Fig. 25 , Supplementary Table 10 and Supplementary Table 11 ), which can be explained by the large amount of amorphous carbon deposited on the surface of the Fe sites affecting the determination of the element contents. As a result, it exhibited a relatively low CO 2 conversion (32.8%) (Fig. 3b ).
To further investigate the influence of the Fe amount, the total Fe content in the GO-nFe/K-Co catalysts was varied from 20 wt% to 30 wt%. Intuitively, as the Fe content rose, the selectivity of LPG (C 3 P – C 4 P ) declined, while the selectivities of light olefins (C 2 = – C 4 = ) and C 5 + products increased (Fig. 3c ), revealing that the extra Fe introduced to the interior and exterior of the graphene fences enhanced the carbon chain growth capacity and inhibited the olefin secondary hydrogenation, respectively. In addition, to compare the performance of GO-Fe/K-Co with catalysts supported on other carbon materials (CNTs, AC, and CB), the same preparation procedures and metal loadings as for GO-Fe/K-Co were used. The LPG selectivities of AC-Fe/K-Co (15.8%) and CB-Fe/K-Co (3.9%) were significantly lower than that of GO-Fe/K-Co (43.6%) due to the excessive formation of the by-products methane and ethane. Besides, the main products of CNTs-Fe/K-Co were light olefins (35.5%) rather than C 3 -C 4 saturated alkanes (7.3%) (Fig. 3d ). Meanwhile, the metal distributions of these catalysts were determined by TEM elemental mapping images (Supplementary Fig. 26 ). Co distributions were found in all the Fe distribution areas in these three catalysts, indicating that these three carbon materials lacked graphene's ability to separate the metals. These findings further demonstrated the superior performance of the graphene-fence-separated dual active sites in regulating carbon chain growth and olefin secondary hydrogenation.
We further investigated the stabilities of GO-Fe/K-Co and GO/K-Fe-Co at 320 °C and 3.0 MPa with a W/F of 4.5 g h mol –1 . For GO/K-Fe-Co, the CO 2 conversion decreased and the selectivity of the CO by-product increased continuously within 40 h on stream. By contrast, GO-Fe/K-Co remained stable during the 100-hour stability test (Fig. 3e, f ). Interestingly, as depicted in the TEM images (Supplementary Fig. 19a ), the metal nanoparticles of the spent GO/K-Fe-Co catalyst aggregated dramatically in comparison with the fresh catalyst as the reaction proceeded. By contrast, the metal particles of the spent GO-Fe/K-Co did not agglomerate, and the particle sizes remained within a stable range throughout the reaction (Supplementary Fig. 19a ) due to the protection of the graphene fences. Previous studies have demonstrated that graphene loaded with iron nanoparticles by the hydrothermal method anchors the iron particles 41 . Combined with our results described above, the graphene fences that separated the dual active sites in the GO-Fe/K-Co catalyst could also spatially confine the aggregation of the metal particles during the reaction, thus preventing deactivation and maintaining a high activity (Supplementary Fig. 19b ). Besides, TEM mapping images of the spent GO-Fe/K-Co catalyst were also recorded to explore the distributions of Fe and Co after the reaction, and enlarged images are shown in Supplementary Fig. 27 . The red circles mark areas with different distributions of Fe and Co (obvious Fe distributions but few Co distributions). As observed, after the reaction, the graphene fences still maintained the separation of the Fe and Co active sites.
Mechanistic studies
Building on the local micro-environments of supported metal catalysts reported in previous work 31 , in this study we employed density functional theory (DFT) calculations to explore the influence of the Fe-Co dual sites separated by graphene fences. To precisely identify the micro-environment of each active site, we first constructed a Fe 5 C 2 (510) surface model, denoted Model 1. Afterwards, Model 2 was built by adding a Co 10 cluster to Model 1 to represent the surface cobalt incorporation of GO-Fe/K-Co (Fig. 4b ). Here, Model 1 corresponded to the Fe 5 C 2 active sites inside the graphene fences of GO-Fe/K-Co, whereas Model 2 represented the Fe 5 C 2 /Co active sites on the surface of the graphene fences (Fig. 4b ). To compare the adsorption capacities of Co carbide and metallic Co, the Co 10 cluster in Model 2 was substituted by Co 8 C 4 , giving Model 3 (Fig. 4b ). According to the adsorption energy results, the adsorption of both H 2 and light olefins on the surface cobalt sites in Model 2 and Model 3 was more stable than that on the interfacial sites, illustrating that H 2 and light olefins were primarily adsorbed on the surface cobalt sites of Fe 5 C 2 /Co due to the electron transfer between cobalt and iron carbide (Fig. 4c, d , and Supplementary Fig. 28 ). The electron transfer between iron carbide and cobalt in Model 2 could be intuitively observed by the charge density difference analysis shown in Supplementary Fig. 29 , in which the red and green colors represent electron accumulation and loss, respectively. Clearly, after being loaded onto Fe 5 C 2 , electrons were transferred from cobalt to Fe 5 C 2 . Since electron-deficient metals are more favorable for hydrogen adsorption 58 , it was this electron transfer that enhanced hydrogen adsorption on the cobalt sites, while hydrogen was not likely to be adsorbed at the Fe 5 C 2 -Co interface due to the electron accumulation there.
Regarding hydrogen adsorption, hydrogen adsorbed at the surface metallic Co sites of Model 2 had a lower adsorption energy (–0.84 eV) than hydrogen adsorbed on the iron carbide of Model 1 (–0.81 eV), revealing that the Fe 5 C 2 /Co sites on the external graphene fences had a stronger H 2 adsorption effect than the Fe 5 C 2 sites inside the graphene fences (Fig. 4c ). Accordingly, the H 2 temperature-programmed desorption (H 2 -TPD) profile of GO-Fe/K-Co exhibited two distinct peaks (I and II) (Fig. 4e ), which corresponded to the weak chemical adsorption and strong chemical adsorption of hydrogen, respectively. rGO-Fe/K-Co displayed a desorption peak near 600 °C resembling that of GO-Fe/K-Co. However, unlike GO-Fe/K-Co, the strong chemical adsorption peak around 650 °C was not obvious (Supplementary Fig. 30 ), because the partial reduction of rGO-Fe/K-Co had already been accomplished when Fe was introduced, making the graphene fences ineffective at dividing the dual active sites. Moreover, the GO-Fe/K catalyst also displayed a single weak hydrogen chemisorption peak before Co was added (Supplementary Fig. 30 ). This evidence strongly proved that the different hydrogen adsorption capacities of the internal Fe 5 C 2 sites and the surface Fe 5 C 2 /Co sites after the Co addition were the main reason for the two hydrogen desorption peaks of GO-Fe/K-Co in the H 2 -TPD profile: Fe 5 C 2 corresponded to the weak hydrogen chemisorption peak (peak I), while Fe 5 C 2 /Co corresponded to the strong hydrogen chemisorption peak (peak II) (Fig. 4b, c, and e ).
Meanwhile, GO-Co/K-Fe also presented a single peak in the H 2 -TPD profile, which could be explained by the fact that a large quantity of Fe was loaded on the surface of the metallic Co, thus inhibiting hydrogen adsorption on the Co sites. Furthermore, Fe and Co were distributed evenly and compactly on the graphene fences in GO-Fe-Co/K and GO/K-Fe-Co due to their simultaneous addition, hence these catalysts also presented a single chemisorption peak (Fig. 4e ). Among them, GO-Fe-Co/K exhibited the highest Co valence states in the XANES results (Supplementary Fig. 12 ), indicating that Co lost the most electrons in GO-Fe-Co/K, while GO-Fe/K-Co, whose Co was supported on the surface Fe, showed the lowest valence states (Supplementary Fig. 12 ); this can be interpreted as the strong metal-support interaction of GO-Fe-Co/K enhancing the electron transfer between Co and graphene (Supplementary Fig. 22 ). Past studies have revealed that electron-deficient metals are more likely to adsorb hydrogen 58 . Therefore, GO-Fe-Co/K displayed the strongest hydrogen adsorption peak. On the contrary, GO/K-Fe-Co, which was loaded with Fe and Co by impregnation, showed a lower-intensity hydrogen adsorption peak (Fig. 4e ) due to the weaker metal-support interaction (Supplementary Fig. 22 ) and less electron transfer between the metals and graphene (Supplementary Fig. 12 ). Besides, an H 2 -TPD test performed on graphene oxide (GO) was used to exclude the effect of carbon material decomposition: no obvious peaks were observed in the profile, indicating that GO remained stable in the helium atmosphere below 800 °C (Supplementary Fig. 31 ).
These hydrogen adsorption characteristics (Fig. 4e ) were consistent with the catalytic performances (Fig. 3b ): weakly chemisorbed hydrogen was more easily activated and therefore more inclined to hydrogenate CO 2 and extend the carbon chains than to participate in olefin secondary hydrogenation, thus generating more olefins. Consequently, the primary products of GO/K-Fe-Co and GO-Co/K-Fe were light olefins (Figs. 3b and 4e ). Among them, GO/K-Fe-Co exhibited a higher C 2 = – C 4 = selectivity (50.1%) (Fig. 3b ) because of its lower adsorption peak position (Fig. 4e ). On the contrary, strongly chemisorbed H 2 , which was not activated, tended to hydrogenate olefins to produce alkanes 58 – 60 , so GO-Fe-Co/K produced more paraffins, especially methane and ethane (Fig. 3b ). However, unlike the other catalysts, GO-Fe/K-Co, which presented double peaks in the H 2 -TPD profile (Fig. 4e ), attained both the carbon chain growth ability and the olefin secondary hydrogenation ability, as mentioned above. As a result, unlike GO-Fe-Co/K, whose main products were methane and ethane, the products of GO-Fe/K-Co were extended and concentrated in propane and butane (43.6%) (Fig. 3b ).
To verify the differences in the ease of olefin hydrogenation over the spatially separated dual active sites, DFT calculations of potential reaction pathways and intermediates for propylene hydrogenation to propane were conducted on the basis of the Model 1 and Model 2 structures constructed above (Fig. 4f , Supplementary Figs. 32 and 33 ) 61 . The Fe 5 C 2 sites needed to overcome an energy barrier of 0.78 eV to convert the *C 3 H 6 intermediate into *C 3 H 7 . For the Fe 5 C 2 /Co site, however, the rate-determining step of the whole process changed to the conversion of *C 3 H 7 to *C 3 H 8 . The lower rate-determining energy barrier (0.43 eV) of the Fe 5 C 2 /Co site indicated a higher propylene hydrogenation activity compared with the Fe 5 C 2 site (Fig. 4f ) 62 . In addition, C 3 H 6 -pulse transient hydrogenation experiments performed on the spent GO-Fe/K-Co, GO/K-Fe-Co, and GO/K-10Fe-20Co catalysts were used to examine the propylene secondary hydrogenation capacities of the different active sites under realistic conditions. Consistent with the DFT results, with the same amount of C 3 H 6 pulsed into the reactor, GO-Fe/K-Co exhibited a higher C 3 H 8 signal than GO/K-Fe-Co, demonstrating a stronger propylene secondary hydrogenation capacity. Furthermore, GO/K-10Fe-20Co showed the highest C 3 H 8 signal, further revealing that the external Fe/Co sites were the main active sites for propylene hydrogenation to propane (Fig. 4g ). No pulse peaks attributable to methane were found for the three spent catalysts (Supplementary Fig. 34 ), illustrating that no hydrocracking reaction occurred.
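As a rough back-of-the-envelope illustration (not part of the paper's methodology), the rate enhancement implied by lowering the rate-determining barrier from 0.78 eV to 0.43 eV can be estimated with a simple Boltzmann factor; the temperature used here (573 K) is an assumed value for illustration only:

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def rate_ratio(barrier_high_ev, barrier_low_ev, temp_k):
    """Ratio k_low/k_high implied by two activation barriers,
    assuming identical pre-exponential factors (Arrhenius form)."""
    return math.exp((barrier_high_ev - barrier_low_ev) / (K_B_EV * temp_k))

# Barriers from the DFT results: 0.78 eV (Fe5C2) vs 0.43 eV (Fe5C2/Co)
ratio = rate_ratio(0.78, 0.43, 573.0)  # 573 K is an assumed temperature
print(f"Fe5C2/Co is ~{ratio:.0f}x faster for propylene hydrogenation")
```

Even under this crude equal-prefactor assumption, the 0.35 eV barrier difference corresponds to roughly three orders of magnitude in rate, consistent with the much stronger propylene hydrogenation observed over the Fe 5 C 2 /Co sites.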
Consequently, a reaction path for selective CO 2 hydrogenation over the GO-Fe/K-Co catalyst was proposed in Fig. 4a by analyzing the reaction results and the mechanistic characterizations. Initially, CO 2 was converted into CO via the RWGS reaction at the internal Fe active sites, followed by the FTS process to produce light olefins. Diffusion then readily transported the light olefins to the CoFe active sites, with relatively high cobalt content, on the external surface of the graphene fences. Owing to their higher hydrogen adsorption capacities, these sites hydrogenated the olefins to light alkanes, producing the high selectivity of LPG. Notably, the adsorption strengths of propylene and 1-butene on the metallic Co surface in Model 2 were stronger than those on Co 2 C in Model 3 (Fig. 4d ), verifying a higher olefin hydrogenation efficiency of metallic Co over Co 2 C in this unique CO 2 hydrogenation reaction system. Meanwhile, because Co is inactive in the RWGS reaction, cobalt loaded on the outer surface consumed a large amount of the CO produced by the internal Fe active sites, resulting in an ultra-low CO selectivity (2.2%). However, the high H 2 concentration resulting from the lack of the RWGS reaction drove direct CO 2 hydrogenation to the by-products methane and ethane at the Fe 5 C 2 /Co sites on the outer surface of the graphene fences, as demonstrated by the catalytic performance of GO/K-10Fe-20Co (Fig. 3a ) 39 .
To explore the synergistic effect of the dual active sites in the reaction process, we performed DFT calculations for the chain growth and olefin hydrogenation reactions from ethylene to butane over the Fe 5 C 2 and Fe 5 C 2 -Co sites (Fig. 5 , Supplementary Figs. 35 – 37 ). At the Fe 5 C 2 site, alkenes are more likely to undergo C–C coupling reactions to achieve carbon chain growth than secondary hydrogenation reactions, owing to the lower free energy barriers of the former; therefore, more long-chain alkenes would be obtained. At the Fe 5 C 2 -Co site, the hydrogenation of alkenes to alkanes is easier than the chain growth reaction, so more ethane than propane and butane would be produced. However, once the propylene and butene products diffused from the Fe 5 C 2 sites to the Fe 5 C 2 -Co sites, the low energy barriers of the propylene and butene hydrogenation reactions (0.52 and 0.65 eV) meant that these olefins were easily hydrogenated to propane and butane, resulting in a high selectivity of LPG products.
These calculation results also revealed the difficulty of producing LPG from CO 2 hydrogenation via a Fischer-Tropsch pathway, namely the contradiction between carbon chain growth and olefin secondary hydrogenation. For active sites with weak hydrogen adsorption capacity, such as Fe 5 C 2 , it is difficult for alkenes to be hydrogenated to alkanes, leading to low alkane selectivity. For sites with strong hydrogen adsorption capacity, such as Fe 5 C 2 -Co, on the other hand, it is difficult to achieve carbon chain growth, and excessive methane and ethane production reduces the LPG selectivity (Figs. 4c and 5 ). In this situation, the proposed graphene-fence-separated dual active sites could simultaneously meet the demands of carbon chain growth and olefin hydrogenation, thus overcoming this difficulty.
As summarized above, as a catalyst for producing olefins, GO/K-Fe-Co had a surface with a large number of iron carbide active sites combined with a small number of metallic cobalt sites. The intimate contact (Fig. 1f ) and electron transfer between Fe and Co made their hydrogen adsorption capacities tend toward uniformity. Meanwhile, in the absence of separation effects, the higher surface Fe/Co ratio (Supplementary Table 3 ) also reduced the hydrogen adsorption capacity 63 – 65 . Accordingly, GO/K-Fe-Co did not exhibit a strong hydrogen chemisorption peak (Fig. 4e ) at such a high surface Fe/Co ratio (Supplementary Table 3 ). Furthermore, as observed in the TEM images (Fig. 1f ), K was uniformly distributed on the Fe and Co nanoparticles owing to their good dispersion. Potassium has been proven to restrain H 2 chemisorption and increase olefin selectivity 66 – 68 . Thus, without the assistance of graphene fences, the formed light olefins readily diffused into the gas flow and were carried away because of the weak olefin secondary hydrogenation capacity (Fig. 4e ), so the catalyst exhibited a higher light olefin selectivity (50.1%) (Fig. 3b ).

Discussion
In summary, graphene-fence-regulated Fe-Co bimetallic catalysts with homogeneous active sites or spatially separated dual active sites were successfully prepared and employed in the CO 2 hydrogenation reaction for the selective production of light olefins or LPG without any post-treatments. The GO/K-Fe-Co catalyst, with its uniform distribution and smaller particle sizes, reached a C 2 = – C 4 = selectivity as high as 50.1% at a CO 2 conversion of 55.4%. Meanwhile, the graphene-fence-separated GO-Fe/K-Co catalyst displayed a 43.6% selectivity for LPG (propane and butane) and an ultra-low CO selectivity of 2.2% at 46% CO 2 conversion without the help of any zeolites. This result corresponds to the highest space-time yield (STY, 151.0 g kg cat −1 h −1 ) of LPG ever reported. Characterization and theoretical calculation results demonstrated that the dual active sites (iron carbides and metallic cobalt) performed their respective tailor-made functions, simultaneously satisfying the requirements of suitable carbon chain growth and olefin secondary hydrogenation and selectively improving the LPG selectivity. Furthermore, the graphene fences prevented the metal particles from agglomerating, thus enhancing the catalytic stability. We expect that this method of exploiting the specific structure of graphene to fabricate catalysts with multiple active sites will inspire other important catalytic reactions. Finally, this work not only provides a multiple-active-site catalyst with a unique spatial distribution for selective CO 2 hydrogenation but also provides a fundamental understanding of the role of graphene fences in selective hydrogenation. The approach can be broadened to other supported catalysts and offers valuable guidance for the rational design of powerful reaction environments through engineering the spatial distributions of different active sites.

Abstract

Tuning the CO 2 hydrogenation product distribution to obtain high-selectivity target products is of great significance.
However, due to the imprecise regulation of chain propagation and hydrogenation reactions, the oriented synthesis of a single product is challenging. Herein, we report an approach to controlling multiple sites with graphene fence engineering that enables the direct conversion of CO 2 /H 2 mixtures into different types of hydrocarbons. Fe-Co active sites on the graphene fence surface present 50.1% light olefin selectivity, while the spatial Fe-Co nanoparticles separated by graphene fences achieve a liquefied petroleum gas selectivity of 43.6%. With the assistance of graphene fences, iron carbides and metallic cobalt can efficiently regulate the C-C coupling and olefin secondary hydrogenation reactions to achieve product-selective switching between light olefins and liquefied petroleum gas. Furthermore, this work also sets a precedent for direct CO 2 hydrogenation to liquefied petroleum gas via a Fischer-Tropsch pathway, with the highest space-time yields compared with other reported composite catalysts.
Product-selective switching of CO 2 hydrogenation is a huge challenge. Here, the authors report an approach to manipulating Fe-Co sites with three-dimensional graphene fences that achieves integrated synthesis of different products.
Supplementary information
The online version contains supplementary material available at 10.1038/s41467-024-44763-9.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (22102001) (L.G.), NEDO (New Energy and Industrial Technology Development Organization) (JPNP16002) (N.T.), JST SPRING (JPMJSP2145) of Japan (J.M.L.), the Liaoning province unveils science and technology project (2021JH1/10400101) (B.L.), and Grants-in-Aid from the Japan Society for the Promotion of Science (JSPS) (22H01864, 23H05404) (N.T.). We thank Bowei Meng and Hengyang Liu for synthesizing catalysts and performing characterization during the revision process.
Author contributions
J.M.L. and J.L. completed the catalyst tests and analyzed the data. L.G. wrote the paper with input from all the authors. W.W. and C.W. synthesized the catalysts. W.G. and X.G. did the XRD analysis. Y.H. did the catalyst morphology characterizations. G.Y. and S.Y. analyzed the characterization results. B.L. and N.T. revised the paper. All the authors contributed to the discussions on the results.
Peer review
Peer review information
Nature Communications thanks Mingyue Ding, Tiancun Xiao and the other, anonymous, reviewer for their contribution to the peer review of this work. A peer review file is available.
Data availability
The source data generated in this study are provided in the Source Data file with this paper.
Competing interests
The authors declare no competing interests.

Nat Commun. 2024 Jan 13; 15:512 (CC BY).
PMC10787760 (PMID 38218848)

Introduction
Host resistance is the most effective way to reduce yield loss caused by crop diseases. Unfortunately, deployed resistance (R) genes frequently become ineffective due to genetic changes in the pathogens. Breeding for resistance is costly and introduction of resistance genes may reduce the rate of yield increase 1 . Thus, breeding varieties with durable resistance without compromising yield potential is a major challenge in crop improvement.
Plants and pathogens have formed various interactions during their long-term coevolution. Many host plants exhibit resistance during the entire life cycle, often referred to as seedling or all-stage resistance (ASR) 2 . ASR genes typically encode nucleotide-binding leucine-rich repeat (NLR) receptors that recognize specific pathogen avirulence (Avr) proteins, providing race-specific resistance. Thus, such genes are generally nondurable when deployed singly 3 . Interestingly, paired NLRs, such as PigmS/PigmR, have been shown to confer durable resistance against rice blast 4 . In other cases, resistance is restricted to particular developmental stages, the best known of which is adult plant resistance (APR) 5 . Some APR genes encode non-NLR receptors and confer race-nonspecific and durable resistance 6 . Examples include Yr36 , which is a cytoplasmic protein kinase gene 7 , Lr34 , which is an ABC transporter gene, and Lr67 , which is a hexose transporter gene 8 , 9 . Nevertheless, certain well-documented APR genes of wheat, such as Lr12 , Lr37 and Yr49 , still confer race-specific resistance 10 . Notably, the rice adult-plant race-specific genes Xa3 and Xa4 for resistance to bacterial blight have demonstrated durability 11 . Additionally, host plants may also express tissue-specific resistance 12 , 13 . For instance, some wheat genotypes with either ASR or APR to rust may develop susceptible symptoms on spikes. Consequently, although numerous NLR genes and atypical resistance genes, such as receptor-like kinases (RLKs), tandem kinase proteins (TKPs), wall-associated kinases (WAKs) and transcription factors (TFs), have been isolated from plants in the past decades, our understanding of the molecular mechanisms underlying divergent resistance among host plants remains limited.
An NLR-encoding locus that contains a single R gene with one or more alleles encoding different resistance specificities is common in the plant kingdom. For example, the wheat powdery mildew (Pm) resistance gene Pm2 includes eight functional alleles 14 , and Pm3 includes 17 functional alleles identified to date 15 . Recent studies have expanded our understanding of the avirulence (AVR) effectors from the pathogen recognized by allelic series, which are sequence-unrelated proteins 16 , 17 . Thus, as an alternative to pyramiding different R genes, different allelic variants can also be combined to achieve more durable and broad-spectrum resistance 18 . However, in some cases, pyramiding different R genes or alleles can cause mutual suppression of their functions. A few reports have described the genetic basis of disease resistance suppression in wheat, with examples including suppression of stem rust resistance by a subunit of the mediator complex gene 19 , and direct interactions between homologous NLR receptors encoded by alleles of Pm3 and Pm8 that suppress powdery mildew resistance 20 , 21 . In contrast, our knowledge regarding the suppression of unrelated NLR immune receptors is limited.
Powdery mildew, caused by Blumeria graminis f. sp. tritici ( Bgt ), is a common disease that limits wheat production in temperate regions. To enhance host resistance, numerous disease resistance genes have been transferred into bread wheat from its wild relatives. Remarkably, five powdery mildew resistance genes, Pm21 , Pm55 , Pm62 , Pm67 and Pm5V have been introgressed from different Dasypyrum villosum (L.) (2 n = 2 x = 14, VV) accessions into wheat through Robertsonian translocations (RobTs) 22 – 24 . Among these, wheat- D. villosum T5DL·5 V#4 S and T5AL·5 V#4 S translocation lines, carrying the gene Pm55 from D. villosum accession 91C43, showed developmental-stage and tissue-specific resistance to wheat powdery mildew 12 . However, the T5DL·5 V#5 S translocation line, carrying the gene Pm5V from D. villosum accession 01I140, exhibited broad effectiveness against Bgt races at all stages 24 . Therefore, the different 5VS-introgression lines present an opportunity to isolate and pyramid genes that confer distinct resistance in wheat. Additionally, previous research has shown that the transfer of Pm55 and Pm5V into diverse wheat genetic backgrounds did not negatively affect yield-related traits 24 , 25 . Consequently, these genes have been utilized in the wheat breeding program in China to develop elite resistance lines 26 .
In this work, we aim to elucidate the genetic and molecular basis underlying the distinct types of resistance against powdery mildew observed in different 5VS-translocation lines. Our findings reveal that Pm55 and Pm5V are allelic on chromosome arm 5VS; however, they have divergent interactions with a linked inhibitor gene, SuPm55 , which causes the distinct resistance of the different 5VS-translocation lines. Notably, Pm55 and SuPm55 are unrelated NLR proteins, and knockout of SuPm55 significantly reduces plant fitness. Accordingly, our results reveal the complex interactions of different NLR immune receptors in disease resistance and provide insights into the suppression of resistance in wheat. Combining the T5AL·5 V#4 S and T5DL·5 V#5 S translocations offers a promising strategy for breeding durable resistance to wheat powdery mildew.
Plant materials and field trials
Dasypyrum villosum accessions and two panels of wheat- D. villosum introgression lines listed in Supplementary Table 1 are maintained at the Cytogenetics Institute, Nanjing Agricultural University (CINAU). The powdery mildew-susceptible wheat cv. Fielder was used for transgenic experiments. Yield trials of NAU0686 and NAU2021 were performed in a randomized plot design with five replications. Each plot consisted of seven rows, measuring 3.5 m in length and 2.0 m in width. Fifty plants located in the middle of the internal rows of each line were randomly selected for the analysis of yield-related traits, including plant height, thousand-kernel weight, spikes per plant, seeds per spike and grain yield per plot.
Bgt infection, staining and microscopy
Wheat Bgt isolates E09, E26 and E31 were provided by Prof. Yilin Zhou, Institute of Plant Protection, Chinese Academy of Agricultural Sciences, Beijing. Isolate E09 is virulent on Pm1a , 3a , 3b , 3c , 3 f , 3e , 5a , 6 , 7 , 8 , 19 , 25 , 34 and 35 . Isolate E26 is virulent on Pm1a , 3a , 3c , 3d , 3 f , 3e , 5a , 6 , 7 , 8 , 19 and 25 , whereas E31 is virulent on Pm1a , 2a , 3a , 3b , 3c , 3d , 3 f , 3e , 4a , 4b , 5a , 6 , 7 , 8 , 19 , 24 , 25 , 30 and 34 . Additionally, 18 other Bgt isolates collected in China were maintained at the Hubei Academy of Agricultural Sciences, Wuhan. Powdery mildew response tests on wheat seedlings were carried out in a greenhouse at 20–22 °C under a 12 h light/12 h darkness photoperiod. The Bgt isolates were maintained and increased on seedlings of wheat cv. Chancellor. Wheat lines were inoculated at the two-leaf stage, and infection types (IT) were recorded on a 0–4 scale at 7 days post inoculation, when susceptible NAU0686 control plants were heavily diseased 13 . Adult-plant tests on lines grown in a field nursery were performed using Bgt isolate E09. Susceptible cv. NAU0686 planted on both sides of each test row served as inoculum spreader and control. Reactions on leaves and sheaths were recorded at the stem elongation and heading stages on a 0–9 response scale 12 .
To evaluate differences in mycelial development, leaf segments were prepared using an endogenous peroxidase-dependent in situ histochemical staining procedure 13 . To detect leaf cell death, primary leaves from TF5V-1 and NAU1908 plants were stained with 0.5% (w/v) trypan blue at 7 days post inoculation (dpi) with Bgt isolate E09 38 . To visualize hydrogen peroxide (H 2 O 2 ) accumulation, seedling leaves were stained at 2 dpi with 3,3’-diaminobenzidine (DAB) solution (1 mg/mL, pH 5.8) for 12 h, and then bleached in absolute ethanol 39 . The treated leaves were observed under an Olympus BX60 microscope (Olympus, Tokyo, Japan).
Genome assembly of the Pm55 and SuPm55 loci
To identify candidate genes in the genetic interval, we isolated and sequenced the translocated chromosome T5DL·5 V#4 S from line TF5V-1. Briefly, liquid suspensions of intact mitotic chromosomes were prepared from line TF5V-1. The chromosomes in suspension were fluorescently labeled by FISHIS using oligonucleotide 5’-FITC-GAA7-FITC-3’ (Integrated DNA Technologies, Inc., Iowa, USA) and counterstained with DAPI (4′,6-diamidino-2-phenylindole). Next, bivariate flow karyotyping and chromosome sorting were done on a FACSAria II SORP flow cytometer and sorter (Becton Dickinson Immunocytometry Systems, San José, USA) 40 , 41 . The chromosome content of the flow-sorted fractions was determined by FISH on ~2000 chromosomes flow-sorted onto microscope slides, using probes for the pSc119.2, pTa71 and Afa family repetitive DNA sequences. Subsequently, DNA from flow-sorted chromosomes was purified, and the sheared DNA was used to prepare sequencing libraries using the NEBNext Ultra II DNA Library Prep Kit for Illumina (New England Biolabs, Ipswich, USA). The libraries were sequenced on an Illumina NovaSeq 6000 to produce 2 × 250 bp paired-end reads. The size of the T5DL·5 V#4 S de novo assembly was 496.7 Mb, with a scaffold N50 of 18.3 kb. The total number of scaffolds was 57,258.
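For reference, the scaffold N50 quoted above is the length at which scaffolds of that size or longer contain at least half of the total assembly; a minimal sketch of the standard calculation (illustrative only, not the assembly pipeline used here):

```python
def n50(lengths):
    """Return the N50 of a list of contig/scaffold lengths:
    the length L such that scaffolds >= L cover >= 50% of the assembly."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Toy example: total = 230, half = 115; 100 + 50 = 150 >= 115 -> N50 = 50
print(n50([100, 50, 40, 30, 10]))  # -> 50
```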
Moreover, a bacterial artificial chromosome (BAC) library was constructed using genomic DNA from D. villosum 01I140 (V#5). The nuclei of D. villosum accession 01I140 were isolated from approximately 30 g of etiolated young leaf tissue. High-molecular-weight (HMW) DNA was released from the nuclei by proteinase K in lysis buffer (0.1 mg/mL proteinase K dissolved in 0.5 M EDTA, pH 9.1) at 50 °C for 48 h; the lysis buffer was changed after 24 h. Plugs (usually containing 5 to 6 μg of undigested HMW DNA) were partially digested with BamH I or Hind III 42 . The BAC library comprised 351,520 clones and represented ~6.0-fold coverage of the D. villosum genome ( ~ 4.05 Gb). BAC clones were extracted from each of the primary pools using a Qiagen Large-Construct Kit (Germany). For each selected BAC clone, at least 300× Illumina paired-end short reads and 50× PacBio continuous long reads (CLR) were generated on the Illumina NovaSeq platform and PacBio Sequel platform, respectively. Library preparation and sequencing were performed at Novogene Co., Ltd (Beijing, China). Complete sequences of the BAC clones were finally assembled from the short and long reads using an assembly workflow 43 . Finally, putative genes were annotated with the TriAnnot pipeline ( https://urgi.versailles.inra.fr/triannot/?pipeline ).
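As a quick consistency check (illustrative arithmetic only; the ~69 kb mean insert size is an assumed value inferred from the other figures, not stated in the text), the quoted ~6.0-fold genome coverage follows from the clone count and genome size:

```python
def library_coverage(n_clones, mean_insert_bp, genome_bp):
    """Fold coverage of a clone library = total cloned DNA / genome size."""
    return n_clones * mean_insert_bp / genome_bp

# 351,520 BAC clones, assumed ~69 kb mean insert, ~4.05 Gb genome
cov = library_coverage(351_520, 69_000, 4.05e9)
print(f"~{cov:.1f}-fold coverage")  # ~6.0-fold, matching the text
```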
Primer design and PCR amplification
DNA sequences from the 5VS genome were used as templates for the development of molecular markers (Supplementary Table 10 ). All primers were designed using the Primer-BLAST tool ( Triticeae Multi-omics Center/ViroBlast Home Page) and DNAMAN V6. PCR amplifications were performed in a 10 μL reaction mixture containing 5 μL 2 × Phanta® Master Mix (Vazyme, Nanjing, China), 0.2 μL of each primer, and 1 μL of genomic DNA. DNA amplification was performed at 95 °C for 3 min, followed by 33–35 cycles of 95 °C for 10 s, 55–60 °C (depending on the annealing temperature of the primer pairs) for 30 s, and 72 °C for 30 s/kb, with a final extension at 72 °C for 5 min. PCR products were separated in 1% agarose gels.
Genomic DNA and RNA isolation and transcript analysis
Genomic DNA for molecular detection and gene cloning was extracted from seedling or adult-plant leaves of mutants, transgenic plants and parents by the CTAB method 44 . Total RNA from Bgt -inoculated leaves was extracted at 72 hpi for full-length transcript sequencing and cDNA analysis. Reverse transcription was performed using a HiScript III 1st Strand cDNA Synthesis Kit ( + gDNA wiper) (Vazyme, Nanjing, China). A cDNA library was constructed using the SMARTer PCR cDNA Synthesis Kit (Tiangen, Nanjing, China) and sequenced on an Illumina platform at Beijing Biomark Biotechnology Co. Ltd. Raw data of 64.38 GB and 84.75 GB were obtained for NAU1908 and TF5V-1, respectively. To obtain circular consensus sequences (CCS), the raw data were processed using pbccs 6.4.0 software provided with the PacBio Sequel platform. Isoseq3 3.8.2 software was used to remove chimeras and poly(A) tails to obtain full-length transcripts.
Gene expression analysis by qRT-PCR
Total RNA was isolated using TriPure Isolation Reagent (Roche, Mannheim, Germany). First-strand cDNA was synthesized from 2 μg of total RNA with HiScript III reverse transcriptase (Vazyme Biotech, Nanjing, China) following the manufacturer’s instructions. Quantitative reverse transcription polymerase chain reaction (qRT-PCR) was performed using SYBR Green (Vazyme Biotech, Nanjing, China) with a LightCycler® 480 instrument (Roche, Mannheim, Germany) 45 . The ACTIN gene was used as an internal control, and the 2 −ΔΔCT method was used to calculate relative gene expression 46 . Three biological replicates and four technical replicates were performed for each sample.
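The 2 −ΔΔCT calculation normalizes the target gene's threshold cycle (Ct) to the reference gene (here ACTIN) and then to a calibrator sample; a minimal sketch with made-up Ct values for illustration:

```python
def fold_change(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """Relative expression by the 2^(-ddCt) method.
    s = sample of interest, c = calibrator/control sample."""
    d_ct_sample = ct_target_s - ct_ref_s      # normalize to reference gene
    d_ct_control = ct_target_c - ct_ref_c
    dd_ct = d_ct_sample - d_ct_control        # normalize to calibrator
    return 2 ** (-dd_ct)

# Hypothetical Ct values: target amplifies 2 cycles earlier (relative to
# ACTIN) in the treated sample -> 4-fold up-regulation
print(fold_change(24.0, 20.0, 26.0, 20.0))  # -> 4.0
```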
BSMV-induced gene silencing (BSMV-VIGS)
BSMV-VIGS was used to investigate candidate gene function in NAU1908, TF5V-1 and R5VS-15. A 211 bp fragment of the CNL1 CC domain was inserted into the BSMV vector to form the recombinant virus BSMV: CNL1 , which was used to silence CNL1 in TF5V-1. Fragments of 229 bp and 197 bp from the CNL2#5 NB-ARC domain were separately inserted into the BSMV vector for silencing in NAU1908, and a 266 bp fragment of the CNL2#4 NB-ARC domain was used for silencing in R5VS-15. A 265 bp fragment of the Pm2-5V#4 LRR domain was used for silencing in R5VS-15. The recombinant virus BSMV: PDS was used as a positive control, and BSMV: γ was used as a negative control 47 . When the positive-control PDS gene (phytoene dehydrogenase) was silenced and showed photobleaching symptoms, the fourth leaves of inoculated plants were sampled for in vitro identification and for determination of silencing efficiency by qRT-PCR. Detached leaves were cultured on 6-BA medium for 7 days, and disease development was assessed by hyphal development. At least 10 plants of each genotype were challenged with each BSMV vector, and the experiments were repeated three times.
EMS-induced mutants
Approximately 2000 seeds of NAU1908 and 3000 seeds of TF5V-1 were treated with 1.0% ethyl methanesulfonate (EMS). Treated seeds were then sown in the field to generate M 1 plants, and 678 independent M 1 -generation plants of NAU1908 and 1042 M 1 plants of TF5V-1 were obtained, respectively. To screen susceptible mutants, about 50 seeds of each M 1 plant were evaluated for the response to Bgt isolate E09. Susceptible M 2 plants were further advanced to the M 3 generation and their progeny tested. The full-length genomic sequences of CNL2 alleles were amplified from the susceptible mutants of TF5V-1 and NAU1908.
Transformation of CNL2 alleles
The 8391 and 8149 bp genomic sequences encompassing the native promoters and terminators of CNL2#5 and CNL2#4 , respectively, were cloned into the LGY-OE3 binary vector. The genomic fragments were inserted into the Hind III –Bam HI restriction endonuclease sites of the digested pLGY-OE3 with an In-Fusion HD Cloning Kit (Clontech Laboratories, Mountain View, CA, USA). The constructs were introduced into the powdery mildew-susceptible bread wheat cv. Fielder by Agrobacterium tumefaciens -mediated transformation 48 , 49 . Both T 0 and T 1 plants were tested for the presence of the transgene by PCR amplification using the CNL2#4 - and CNL2#5 -specific markers, respectively. Wheat cv. Fielder was used as the negative control. The powdery mildew responses of T 0 transgenic plants and sibling controls were examined at the adult-plant stage, and T 1 and T 2 plants were tested at the seedling stage as described above 50 .
CRISPR-Cas9 editing of CNL1
For editing CNL1 with CRISPR-Cas9, a sgRNA targeting the CC domain of CNL1 was designed. The Cas9-sgRNA expression vectors were constructed using CRISPRdirect ( http://crispr.dbcls.jp/ ) and introduced into Agrobacterium strain EHA105. Agrobacterium -mediated transformation was performed on immature embryos of TF5V-1 51 . Positive transgenic seedlings were selected with hygromycin (100 mg/L), and genomic DNA was extracted for PCR detection of the Cas9 gene fragment, the 35 S promoter and the hygromycin B phosphotransferase gene. PCR markers specific to CNL1 were used to identify mutations in the CC domain. Homozygous mutants were identified by sequencing in the T 1 and T 2 generations.
Yeast two-hybrid (Y2H) assays
The MATCHMAKER GAL4 Two-Hybrid System 3 (Clontech) was used to examine the interactions between proteins. Appropriate amounts of bacterial solution were spread on SD/-Leu-Trp (SD/-L-T) media and incubated inverted at 30 °C for 3 days. Single colonies were picked from SD/-Leu-Trp-His-Ade (SD/-L-T-H-A) medium, resuspended in water, diluted in a 1, 10 −1 , 10 −2 , 10 −3 gradient, spotted on SD/-L-T-H-A medium, cultured at 30 °C and observed after one week 52 .
Bimolecular fluorescence complementation (BiFC) assays
The recombinant vectors Pm55-CC-nYFP and SuPm55-CC-cYFP were constructed using the CC domain sequences of Pm55 and SuPm55 , and Agrobacterium GV3101 (Tiangen, Nanjing) containing the recombinant plasmids was inoculated into 20 mL LB liquid medium 53 . The cells were cultured at 28 °C until the logarithmic growth phase and collected by centrifugation for 10 min at 5000 × g and room temperature. The Agrobacterium cells were resuspended in infiltration solution (10 mM MgCl 2 , 10 mM MES, 150 μM AS, pH = 5.6) to OD600 = 0.8–1.0 and then incubated for 3 h. Equal volumes of the mixed bacterial suspensions were injected into N. benthamiana leaves, and after 48 h the YFP fluorescence signal was observed and photographed with a laser confocal microscope (Leica SP8, Germany).
Cell death assay in N. benthamiana leaves
Agrobacterium strain GV3101 carrying the relevant plasmids was suspended in buffer to OD = 0.5 and then injected into N. benthamiana leaves, followed by observation of cell necrosis for 24–72 h. After the appearance of cell death, the leaves were stained with 0.4% TPN and subsequently decolorized with ethanol: acetic acid (3: 1, V/V) until the background became invisible. Finally, the leaves were photographed for recording 54 .
Luciferase complementation (Luc) imaging assay
To investigate the interaction between the CC domains of Pm55 and SuPm55 , a luciferase complementation assay was performed. In this assay, bacterial solutions of cLUC- Pm55 CC/nLUC- SuPm55 CC, cLUC/nLUC- SuPm55 CC, cLUC- Pm55 CC/nLUC, and a mixture containing the empty vectors cLUC and nLUC were injected into four different regions of N. benthamiana leaves 55 . The signals were detected 48 h after infiltration with a NightShade LB985 system (Berthold Technologies, Germany), and 10 leaves were analyzed.
Co-immunoprecipitation assay
Co-immunoprecipitation (Co-IP) experiments were performed following the Pierce HA Tag IP/Co-IP Kit instructions (Thermo) after transient expression in N. benthamiana leaves. Fusion constructs carrying Flag or HA tags were transformed into Agrobacterium strain GV3101, and Agrobacterium cells containing Flag-Pm55CC and HA-SuPm55CC were mixed and infiltrated into N. benthamiana leaves at OD600 = 0.6 to allow full expression of the proteins. Leaf cells expressing bait and target were lysed, the protein complexes were isolated and purified, and the bait complexes were denatured by boiling in SDS. The precipitates were then separated by 10% SDS-PAGE and detected with anti-HA or anti-Flag antibodies (1:500, Abcam, Shanghai, China, No. AB 9110, AB 205606).
Cloning and sequencing of Pm55 homologs
The primer pairs used for cloning the full-length genomic sequences of Pm55 homologs are listed in Supplementary Table 10 . Amplified fragments were ligated into pToPo-Blunt (Aidlab, Nanjing, China) for sequencing at TongYong Co. (Nanjing, China). Putative domains of the cloned genes were analyzed using BLAST ( http://www.ncbi.nlm.nih.gov/blast/ ). Protein prediction and multiple sequence alignment analyses were performed with SMART ( http://smart.embl-heidelberg.de/ ) and DNAMAN 7.0 software (Lynnon Biosoft, USA), respectively. R-gene protein sequences with an N-terminal coiled-coil domain (CNL class) from the NCBI database were aligned using MUSCLE, and a phylogenetic tree was constructed using the UPGMA (unweighted pair group method with arithmetic mean) program in MEGA 6.0 software 56 . Evolutionary distances were determined by the Neighbor-Joining method with Poisson correction and are expressed as the number of amino acid substitutions per site.
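The Poisson correction mentioned above converts the observed proportion of differing amino acids p into an estimated number of substitutions per site, d = −ln(1 − p); a minimal sketch (illustrative only, not the MEGA implementation):

```python
import math

def poisson_distance(seq_a, seq_b):
    """Poisson-corrected distance between two aligned protein sequences:
    d = -ln(1 - p), where p is the proportion of differing sites."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    p = diffs / len(seq_a)
    return -math.log(1.0 - p)

# 1 difference out of 4 sites: p = 0.25, d = -ln(0.75) ~ 0.288
print(round(poisson_distance("MKLV", "MKLI"), 3))  # -> 0.288
```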
Statistical analysis
Mean values and standard errors of the treatments were calculated in Microsoft Excel. t-tests or ANOVA were performed with SPSS 26.0 software (SPSS, Inc., Chicago, IL) to assess the significance of differences; significant differences between two treatments were determined from the probability ( P ) value.
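As an illustrative sketch of the same workflow outside SPSS (the expression values below are hypothetical, not data from this study), the equivalent tests in Python with scipy:

```python
import statistics
from scipy import stats

# Hypothetical relative-expression values for two treatments
control = [1.02, 0.95, 1.08, 0.99]
treated = [2.85, 3.10, 2.70, 3.05]

# Mean and standard error, as computed in Excel in the paper
mean_t = statistics.mean(treated)
sem_t = statistics.stdev(treated) / len(treated) ** 0.5

# Two-sample t-test for comparing two treatments; one-way ANOVA
# generalizes to more than two groups (for exactly two groups,
# the ANOVA F statistic equals the square of the t statistic)
t_stat, p_t = stats.ttest_ind(control, treated)
f_stat, p_anova = stats.f_oneway(control, treated)
```

A design note: with only two groups the two tests are equivalent, which is a convenient sanity check when reproducing SPSS output.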
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. | Results
High-resolution mapping reveals an inhibitor linked with Pm55
To accurately evaluate the response of the 5VS-translocation lines to powdery mildew, the translocated chromosomes T5AL·5V#4S, T5DL·5V#4S and T5DL·5V#5S were each transferred into the genetic background of the highly susceptible cv. NAU0686. Stable translocation lines were then developed in the BC 5 F 6 progeny (Supplementary Table 1 ). Upon inoculation of these lines with Bgt isolate E09, both the T5AL·5V#4S translocation line NAU185 and the T5DL·5V#4S line TF5V-1, carrying Pm55 , were susceptible at the 3-leaf stage (IT 4) but highly resistant after the 5-leaf stage (IT 1), with susceptible leaf sheaths only at the adult-plant stage (Fig. 1a, b ). In contrast, the T5DL·5V#5S translocation line NAU1908, carrying Pm5V , consistently showed high resistance to powdery mildew across all stages and tissues (IT 0;). Trypan blue and DAB staining showed large numbers of spores produced on TF5V-1 seedling leaves (3-leaf stage) after inoculation with E09 (Supplementary Fig. 1a, c ), but only very mild cell death and robust accumulation of H 2 O 2 in NAU1908 seedling leaves (Supplementary Fig. 1b, d ). Collectively, chromosome arms 5V#4S and 5V#5S conferred distinct forms of resistance to powdery mildew in wheat.
To map Pm55 and Pm5V , a genetic mapping population of 5425 F 2 individuals was previously constructed by crossing the two T5DL·5VS translocation lines TF5V-1 and NAU1908 24 . Pm5V in NAU1908 had been fine-mapped to an approximately 0.9 Mb region, referenced against the genome sequence of wheat cv. Chinese Spring 5DS, using forty-six crossovers on chromosome arm 5VS 24 . However, it was unclear whether Pm55 and Pm5V are allelic. To test their allelism, we further inoculated the F 3 and F 4 homozygous recombinant lines with Bgt isolates E09, E26 and E31 at the seedling stage, and with E09 at the adult-plant stage. In addition, sixty-three InDel markers and six EST-STS markers were used to screen these lines (Supplementary Fig. 2 , Supplementary Data 1 ). Among the forty-six lines, we identified twenty recombinant lines between markers SCA29236 and SCA12100 (type I) with the same phenotype as NAU1908, showing all-stage resistance to the three Bgt isolates. The Pm5V locus was thereby mapped within the interval flanked by InDel markers SCA93008 and SCA2324 , corresponding to ~140 kb on the sequenced D. villosum 91C43 DH genome 27 . We also observed another set of twenty recombinant lines between markers SCA10690 and SCA25706 (type II) with phenotypes identical to the parent TF5V-1, showing susceptibility to all three Bgt isolates at the seedling stage but resistance to E09 at the adult-plant stage, with susceptible lower leaf sheaths. This indicated that Pm55 maps within the interval flanked by InDel markers SCA11392 and SCA23759 . However, the remaining six recombinants between markers SCA11392 and SCA93008 (R5VS-13 to R5VS-18, type III) had contrasting phenotypes: they were resistant to E09 and E26 but susceptible to E31 at the seedling stage, and resistant to E09 at the adult-plant stage without susceptible leaf sheaths, which was distinct from the parental phenotypes of TF5V-1 and NAU1908.
Based on the genotypes of the type II and type III lines, one possibility was that another resistance locus on 5V#5S, flanked by InDel markers SCA11392 and SCA77919 , provided the seedling resistance to E09 and E26 in the type III recombinant lines; another was that an inhibitor in the Pm55 interval suppressed Pm55 -mediated resistance to E09 and E26 in the type II recombinant lines. Notably, none of the recombinants was susceptible to E09 at the adult-plant stage, suggesting that Pm55 and Pm5V could indeed be allelic or closely linked.
To fine-map the resistance or suppression locus linked with Pm55 / Pm5V on 5VS, a cross was made between TF5V-1 and the type III recombinant line R5VS-15 (Supplementary Fig. 3 ). All F 1 seedlings were susceptible to E09 but showed adult-plant resistance with susceptible lower leaf sheaths, as in TF5V-1. The F 2 population segregated into 128 resistant and 394 susceptible seedlings (Supplementary Table 2 ). All F 2 plants showed adult-plant resistance, but those susceptible at the seedling stage also had susceptible lower leaf sheaths. These results indicated that there was no resistance gene on 5V#5S but rather a dominant inhibitor linked with Pm55 on 5V#4S, suppressing Pm55 -mediated resistance in seedlings and in adult-plant leaf sheaths. Further genotyping of 3882 F 2 individuals identified six crossovers between markers SCA11392 and SCA77919 . Combined with five newly developed markers, the suppression interval was narrowed to a ~100 kb region flanked by InDel markers SCA4218 and SCA39816 by comparison with the D. villosum 91C43 DH genome (Supplementary Fig. 4 , Fig. 1c and Supplementary Data 2 ). In summary, the single gene Pm5V in NAU1908 conferred resistance to powdery mildew at all stages and in all tissues, whereas Pm55 together with a linked inhibitor in TF5V-1 conferred developmental-stage- and tissue-specific resistance.
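The F 2 segregation of 128 resistant : 394 susceptible seedlings fits the 1:3 ratio expected when a single dominant inhibitor masks resistance. As an illustrative check (our own arithmetic, not taken from the paper), a chi-square goodness-of-fit test in plain Python:

```python
# Observed seedling phenotypes in the TF5V-1 x R5VS-15 F2 population
observed = {"resistant": 128, "susceptible": 394}
total = sum(observed.values())  # 522 plants

# Expected counts under 1 resistant : 3 susceptible segregation,
# i.e. a single dominant suppressor of Pm55-mediated resistance
expected = {"resistant": total / 4, "susceptible": 3 * total / 4}

# Pearson chi-square statistic with df = 1
chi2 = sum((observed[k] - expected[k]) ** 2 / expected[k] for k in observed)

# 3.841 is the critical value for df = 1 at alpha = 0.05;
# chi2 is ~0.06 here, so the deviation from 1:3 is not significant
fits_1_to_3 = chi2 < 3.841
```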
Dissection of the candidates for both intervals
Diploid D. villosum is an outcrossing species whose progenies may carry complex haplotypes at a genomic locus of interest. DNA sequences of chromosome arms 5V#4S and 5V#5S were therefore necessary to resolve the haplotypes at the target loci. We had previously flow-sorted and sequenced chromosome arm 5V#5S, yielding 173.4 Mb of sequence with an average scaffold N50 of 22.4 kb 24 . Building on this, we isolated the T5DL·5V#4S translocated chromosome from TF5V-1 by flow sorting and sequenced it using Illumina short-read technology (Supplementary Fig. 5a, b ). The 5V#4S de novo assembly was 171.8 Mb with an average scaffold N50 of 18.3 kb and included 1262 high-confidence protein-coding genes annotated using protein homology-based prediction methods.
To isolate the inhibitor gene, the physical region between markers SCA4218 and SCA39816 on 5VS of 91C43 DH was first subjected to collinearity analysis with Chinese Spring 5AS, 5BS and 5DS, revealing that the annotated genes in this region are highly conserved across genomes (Supplementary Fig. 5c ). We then aligned the annotated genes of the 5V#4S and 5V#5S scaffolds with the inhibitor physical region in the different genomes. We identified two 5V#5S scaffolds, Scaffold4218 (20,202 bp) and Scaffold3806 (39,439 bp), and two 5V#4S scaffolds, Scaffold15749 (54,335 bp) and Scaffold38916 (46,667 bp), containing homologs of the genes located in the inhibitor region of 5AS, 5BS and 5DS; together they generated a contiguous 100,963 bp sequence between markers SCA4218 and SCA39816 (Supplementary Fig. 5d ). This combined sequence included six annotated genes. Among them, an intact coiled-coil nucleotide-binding leucine-rich repeat (CNL) gene, CNL1 (G4), along with gene G3, was absent from the inhibitor region on 5VS of 91C43 DH (Fig. 1d , Supplementary Fig. 5c ). Full-length transcriptome sequencing of TF5V-1 seedling leaves inoculated with Bgt isolate E09 revealed that CNL1 was the only intact transcribed gene of the six, and it was therefore considered the candidate functional gene. The full genomic sequence of CNL1 on 5V#4S was confirmed by PCR: it spans 2651 base pairs (bp) from the ATG start codon to the TGA stop codon, contains six exons and five introns, and encodes a predicted 565-amino-acid protein (Fig. 1f ).
However, the 5V#4S and 5V#5S scaffolds containing homologs of the genes annotated on CS 5AS, 5BS and 5DS between the Pm55 / Pm5V -linked markers SCA93008 and SCA2324 did not cover the Pm55 / Pm5V interval. We therefore constructed a bacterial artificial chromosome (BAC) library using DNA from D. villosum 01I140 (the donor of 5V#5S) for long-read sequencing. Screening the BAC library with markers SCA93008 and SCA2324 enabled us to identify and sequence ten BACs. Of these, BAC5 and BAC8 overlapped the resistance region, forming a contiguous 134,990 bp sequence that contained seven annotated genes, including three CNL paralogs, CNL2 , CNL3 and CNL4 (Fig. 1e ). CNL2 and CNL4 were intact genes, whereas CNL3 was a pseudogene with a disrupted open reading frame (ORF). CNL2 and CNL3 were highly similar (94.8% identity), whereas CNL2 and CNL4 were less similar (81.8% identity). Gene expression analysis revealed that only CNL2 was induced in seedlings of NAU1908 after infection with Bgt isolate E09 (Supplementary Fig. 6a ). Moreover, Pm2-5V , the Pm2 ortholog on 5VS, was used as a control to verify the Bgt -induced expression of the CNL2 alleles. CNL2#4 transcript levels in the type III recombinant line R5VS-15 were approximately eight-fold higher at 72 hpi with isolate E09 (Supplementary Fig. 6b ), and CNL2#5 transcript levels in NAU1908 were approximately three-fold higher at 24 hpi (Supplementary Fig. 6c ). By contrast, transcript levels of the Pm2 orthologs Pm2-5V#4 and Pm2-5V#5 did not change significantly during 0–72 hpi. Accordingly, the CNL2 alleles were prioritized as the functional gene candidates.
The full genomic sequence of the CNL2 allele in NAU1908 (designated CNL2#5 ), amplified by PCR, was identical to the BAC sequence: it spans 3887 bp from the translation initiation (ATG) to the stop (TAA) codon, contains two exons and one intron, and encodes a predicted 1265-amino-acid protein (Fig. 1g ). We identified a 5V#4S scaffold, Scaffold23759 (23,842 bp), containing the full length of the CNL2 allele (Fig. 1e ). Using this sequence, we amplified the full-length CNL2 allele in TF5V-1 (designated CNL2#4 ), which spans 3896 bp from the translation initiation (ATG) to the stop (TGA) codon and encodes a predicted 1267-amino-acid protein. CNL2#4 and CNL2#5 share 95.2% identity at the gDNA level and 91.5% identity at the protein level (Supplementary Fig. 7 ).
Validation of the CNL1 function
We used virus-induced gene silencing (VIGS) to knock down CNL1 in TF5V-1 and verify its function. A construct targeting the CC domain of CNL1 rendered TF5V-1 leaves resistant at the seedling stage (Fig. 2a ). Comparison of mRNA levels by qPCR in TF5V-1 leaves infected with BSMV: CNL1 versus the wild-type BSMV: γ virus showed a significant decrease in CNL1 transcript levels (Fig. 2b ). These results indicated that CNL1 suppresses the seedling resistance of TF5V-1.
CRISPR/Cas9-mediated genome editing was then employed to knock out CNL1 in TF5V-1. Using guide RNAs (gRNAs) targeting conserved regions in the CC domain of CNL1 (Supplementary Fig. 8a ), we obtained four mutant lines with 4-bp (Del5VS-1), 5-bp (Del5VS-2), 6-bp (Del5VS-3) and 27-bp (Del5VS-4) deletions in the target region as putative knockouts of CNL1 (Fig. 2c ). Upon inoculation with Bgt isolate E09, fungal growth and spore production on the seedling leaves of the four homozygous mutants were significantly reduced compared with TF5V-1 (Fig. 2d, e , Supplementary Fig. 8b to f ). Homozygous T 2 progeny from a Del5VS-4 heterozygous T 1 plant all showed all-stage resistance to isolate E09 without lower leaf sheath susceptibility (Fig. 2f, g , Supplementary Fig. 9a, b and Supplementary Table 3 ), confirming the role of CNL1 as the Pm55 suppression gene, hereafter designated SuPm55 .
Additionally, we found that the SuPm55 knockout plants Del5VS-1 to Del5VS-4 created by CRISPR/Cas9 had significantly fewer tillers than TF5V-1 in a powdery mildew-free field (Supplementary Fig. 10a ), resulting in fewer spikes per plant (Supplementary Fig. 10c ) and lower grain yield per plant (Supplementary Fig. 10f ). There were no significant differences in plant height (Supplementary Fig. 10b ), thousand-grain weight (Supplementary Fig. 10d ) or seeds per spike (Supplementary Fig. 10e ). Similar results were obtained in the T 2 population derived from a Del5VS-4 heterozygous T 1 plant, in which tiller and spike numbers were reduced in plants carrying the 27-bp deletion of SuPm55 compared with plants without the deletion (Supplementary Fig. 10g, h , Supplementary Fig. 11 ). These results indicated that inactivation of SuPm55 can have a negative effect on plant fitness.
Validation of CNL2 allele function
To test the function of the CNL2 alleles by VIGS, we designed silencing constructs based on two specific targets in the NB-ARC domain of CNL2#5 and one target in the NB-ARC domain of CNL2#4 (Supplementary Fig. 12a, b ). Application of the CNL2 silencing constructs to NAU1908 ( Pm5V ) and the type III recombinant line R5VS-15 ( Pm55 ) led to 50 to 80% reductions in transcript levels and abundant development of powdery mildew, whereas empty vector-inoculated plants remained resistant (Fig. 3a–d , Supplementary Fig. 12c to f ). These results demonstrated that the CNL2 alleles could be the functional resistance genes in both NAU1908 and R5VS-15.
To further validate the function of the CNL2 alleles, the two full-length CNL2 -derived genomic sequences with native promoters were transformed separately into the susceptible wheat cv. Fielder by Agrobacterium -mediated transformation to determine whether the cloned alleles were sufficient to confer resistance (Fig. 3e ). T 0 individuals were identified by PCR analysis using CNL2 -specific markers. Eleven and fifteen positive T 0 transgenic plants were obtained for CNL2#5 and CNL2#4 , respectively. They displayed various levels of transgene transcription (Supplementary Figs. 13a , b and 14 a, b ), and some were resistant to Bgt isolate E09 (Fig. 3f, g ). T 1 progeny from six positive T 0 plants segregated 3 R:1 S or in more complex ratios, indicative of one or more CNL2 insertions (Supplementary Table 4 ). As expected, all resistant T 1 seedlings co-segregated with the presence of the CNL2 -specific marker (Supplementary Figs. 13c and 14c ). Inoculation of homozygous T 2 seedlings with 18 additional Bgt isolates showed that the CNL2#5 transgenic line CNL2#5T 2 -9 was resistant to all of them, as was NAU1908, whereas the CNL2#4 transgenic line CNL2#4T 2 -3 was resistant to 15 of them, as was R5VS-15. By contrast, the non-transgenic cv. Fielder was susceptible to all isolates (Supplementary Table 5 ). These results demonstrated that CNL2#4 and CNL2#5 confer broad-spectrum Bgt resistance in wheat but with distinct response spectra.
To determine whether the CNL2 alleles are required for Pm55 and Pm5V resistance, we generated EMS-mutagenized populations of NAU1908 and TF5V-1 and inoculated M 1 plants with Bgt isolate E09. Three susceptible mutants of NAU1908 were obtained, and F 1 plants from their intercrosses were also susceptible (Fig. 3h ). Sequence alignments revealed a premature stop codon in the highly conserved segment of the LRR domain of CNL2#5 in line M18 (L798Stop), and missense mutations in the conserved LRR domain in lines M477 (C1065Y) and M512 (P1089L) (Fig. 3j ). In addition, ten susceptible mutants of TF5V-1 were identified in adult-plant tests (Fig. 3i ). Sequencing of CNL2#4 in these mutants revealed SNPs causing amino acid substitutions in each mutant (Fig. 3j , Supplementary Table 6 ). These loss-of-function mutations in CNL2#4 and CNL2#5 provided additional evidence that they are, respectively, Pm55 and Pm5V . Taken together, Pm55 and Pm5V were confirmed to be functional alleles, hereafter renamed Pm55a and Pm55b , respectively.
Evolution and allelic variations of SuPm55 and Pm55
To determine the evolutionary relationships of SuPm55 and Pm55 to other known R genes, we compared their protein sequences with a panel of 33 cloned NLR proteins in wheat (Supplementary Fig. 15 ). Phylogenetic analysis revealed that Pm55a is most closely related to Pm2 (78.1% identity) and SuPm55 to Yr10 (35.8% identity). Moreover, Pm55a and SuPm55 represent non-homologous CNL genes, as their DNA sequences share very low identity (28.8%). Comparative genomic analysis revealed that SuPm55 homologs are absent from the homoeologous group 5 chromosomes of the wheat relatives rye and barley, as well as from the A and D genomes of wheat, with only one copy present on the B genome. A 2875 bp deletion accounts for the absence of SuPm55 alleles from the 5VS genomes of NAU1908 and the sequenced D. villosum 91C43 DH . Additionally, molecular-marker detection showed that SuPm55 alleles are absent from the other five D. villosum accessions (Supplementary Table 1 ). Based on this, we presume that SuPm55 is rare in D. villosum .
Comparison of the Pm55 locus with that of hexaploid wheat revealed low sequence conservation, indicating that the Pm55 locus diverged during evolution (Supplementary Fig. 16a ). Orthologs of CNL2 , CNL3 and CNL4 are all absent from wheat chromosome arm 5AS and barley 5HS, indicating that the origin of Pm55 predates the divergence of D. villosum and barley, probably 14.9 million years ago 27 . Only one homolog of Pm55 is present in each of chromosome arms 5BS and 5DS of wheat and 5RS of rye. These homologs should be orthologous to CNL2 , as CNL3 is an incomplete gene and an 84 bp sequence of CNL4 is missing from all homologs on the subgenome chromosome arms (Supplementary Fig. 16b ). Pm55 orthologs in wheat and its relatives share 82 to 91% identity, significantly lower than that among Pm2 orthologs ( > 93% identity) (Supplementary Table 7 ), suggesting considerable change after the divergence of D. villosum and related species. The full-length sequences of Pm55 alleles isolated from the other five D. villosum accessions (named haplotypes Pm55_h1 – Pm55_h5 ) differed (Supplementary Table 8 ), sharing sequence identity of >93%, lower than that of the Pm2 series ( > 97%) 28 . A phylogenetic analysis clustered all predicted proteins in a single branch consisting of sub-clusters of PM2 and PM55 homologs: all PM2 homologs clustered together, as did all PM55 haplotypes (Supplementary Fig. 17 ). Nonsynonymous (Ka) and synonymous (Ks) nucleotide substitution rates among the PM55 full-length proteins were determined, and a Ka/Ks ratio below 1.0 indicated purifying selection (Supplementary Table 9 ).
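The Ka/Ks criterion used above can be summarized in a few lines: a ratio of nonsynonymous to synonymous substitution rates below 1 indicates purifying selection, about 1 neutral evolution, and above 1 positive selection. A minimal sketch (the rate values below are hypothetical, not taken from Supplementary Table 9):

```python
def selection_regime(ka: float, ks: float) -> str:
    """Classify selection from nonsynonymous (Ka) and synonymous (Ks)
    substitution rates via the Ka/Ks ratio."""
    if ks == 0:
        raise ValueError("Ks must be nonzero to form the ratio")
    ratio = ka / ks
    if ratio < 1.0:
        return "purifying"   # deleterious amino acid changes removed
    if ratio > 1.0:
        return "positive"    # amino acid changes favored
    return "neutral"

# Hypothetical example with Ka/Ks = 0.2 < 1, the regime reported
# for the PM55 full-length proteins
regime = selection_regime(0.02, 0.10)
```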
SuPm55 interacts with Pm55 alleles at the protein level but does not suppress Pm55b -mediated resistance
Expression analysis indicated that SuPm55 is highly expressed in non-infected TF5V-1 seedlings but significantly less so after the 3-leaf stage (Fig. 4a ). Additionally, SuPm55 is highly expressed in TF5V-1 leaf sheaths, with negligible expression in other organs at the adult stage (Fig. 4b ). These observations suggested that SuPm55 expression is developmentally regulated. We also detected a significant reduction of SuPm55 expression in Del5VS-4 seedlings using a qPCR primer located in the 27 bp deletion region (Supplementary Fig. 8a , Fig. 4c ). However, transcript levels of Pm55a in seedlings and adult leaf sheaths were unaltered in TF5V-1 and Del5VS-4 plants (Fig. 4d ), indicating that SuPm55 does not suppress Pm55a transcription. Functional interference between Pm55a and SuPm55 therefore probably occurs at the protein level.
We then conducted several assays to investigate the interaction between PM55 and SuPM55. In Y2H assays, PM55a and SuPM55 interacted through their CC domains, but not through their NB-ARC or LRR domains (Fig. 4e , Supplementary Fig. 18a ). This interaction was further confirmed by co-immunoprecipitation (coIP), bimolecular fluorescence complementation (BiFC) and luciferase complementation (Luc) assays (Fig. 4f–h ). Moreover, cell death assays demonstrated that expression of the Pm55a CC domain alone in Nicotiana benthamiana leaves induced hypersensitive-response cell death in the infiltrated region, whereas the longer CC-NBS or CC-NBS-LRR constructs did not (Supplementary Fig. 19a ), suggesting that the full CC domain of Pm55a is active in cell-death signaling. However, the CC domain of SuPm55 suppressed the hypersensitive-response cell death induced by Pm55a in tobacco leaves (Fig. 4i ), indicating that the SuPm55 CC domain is essential for inhibiting Pm55a -triggered cell death. Interestingly, expression of the Pm55b CC domain alone in tobacco leaves also induced hypersensitive-response cell death (Supplementary Fig. 19b ), and the CC domains of Pm55b and SuPm55 interacted in Y2H assays (Supplementary Fig. 18b ). However, the CC domain of SuPm55 did not inhibit Pm55b -triggered cell death in tobacco leaves (Supplementary Fig. 19b ), probably owing to an amino acid change in the CC domain of Pm55b (Supplementary Fig. 7 ). Furthermore, type I recombinant lines (R5VS-27 to R5VS-31) containing both SuPm55 and Pm55b showed no suppression in tests with Bgt isolates E09, E26 and E31 (Supplementary Fig. 3 ), and line R5VS-27 showed a response spectrum similar to that of NAU1908 when tested with the other 18 Bgt isolates (Supplementary Table 5 ). These findings collectively demonstrated that SuPm55 does not suppress Pm55b -mediated resistance.
Pyramiding SuPm55 / Pm55a and Pm55b in wheat shows no suppression or yield penalty
We crossed the T5AL·5V#4S translocation line NAU185 ( SuPm55 / Pm55a ) with the T5DL·5V#5S translocation line NAU1908 ( Pm55b ) to combine the distinct resistances conferred by 5V#4S and 5V#5S. In the F 2 progeny, the homozygous multi-translocation line NAU2021 was identified using GISH/FISH (Supplementary Fig. 20a , Fig. 5a, b ). Detailed characterization indicated that NAU2021 has development stages similar to those of the background parent NAU0686 (Fig. 5c ) but is approximately 5.0 cm shorter than NAU0686 under field conditions without powdery mildew (Fig. 5d, e ). However, NAU2021 showed no significant reduction in yield-related traits such as seeds per spike and thousand-grain weight (Fig. 5f, g ), and instead showed a slight increase in spikes per plant and plot grain yield compared with NAU0686 (Fig. 5h, i ).
To determine whether suppression of resistance could occur between the allelic Pm55a and Pm55b , we first tested NAU2021 seedlings with Bgt isolates E09, E26 and E31. Although the Pm55a line was susceptible to E31 (IT 4), NAU2021 seedlings retained the Pm55b -mediated resistance to this isolate (Fig. 5j–l ). We then compared the resistance spectra of R5VS-15, NAU1908 and NAU2021 against the 18 Bgt isolates (Supplementary Table 5 ). The Pm55a line R5VS-15 was susceptible to three of the 18 isolates, 48-28 (IT 3), Nj-16 (IT 3) and E21-4 (IT 3). The Pm55b line NAU1908 exhibited medium resistance to isolates 48-18 (IT 2), 48-28 (IT 2) and Nj-16 (IT 2) and high resistance to the remaining isolates. By contrast, NAU2021 displayed additive resistance: medium resistance to isolates 48-28 (IT 2) and Nj-16 (IT 2) and immunity to the remaining 16 isolates. In addition, NAU2021 was immune to powdery mildew at the adult-plant stage without a susceptible leaf sheath (Fig. 5c ). Overall, combining the T5AL·5V#4S and T5DL·5V#5S translocated chromosomes led to neither mutual allele suppression of resistance nor a yield penalty, providing a strategy for pyramiding distinct resistances for durable and broad-spectrum resistance to powdery mildew in wheat (Supplementary Fig. 20b ).
Alien introgressions have greatly enriched the otherwise limited gene pool of bread wheat. However, successful map-based cloning of introgressed resistance genes has often been hampered by insufficient recombination in the alien chromatin regions 29 . Here, we employed T5DL·5VS derivatives of different D. villosum accessions to develop recombinants of chromosome arm 5VS. The identification of homologous recombinants of 5VS enabled evaluation of the complex responses to powdery mildew and ultimately confirmed the resistance and suppression loci on 5VS. Importantly, our map-based cloning efforts were facilitated by chromosome flow sorting and sequencing of 5V#4S and 5V#5S DNA. The sequences obtained not only helped to develop polymorphic InDel markers for genetic mapping, but also facilitated aligning the annotated genes with the reference genomes at the locus of interest. Alignment of the annotated genes in the SuPm55 interval with 5AS, 5BS and 5DS of Chinese Spring revealed high collinearity, providing valuable insights into genetic variability and potential candidate genes for further analysis. Through the assembly of two 5V#4S scaffolds combined with two 5V#5S scaffolds, based on the high-identity sequences they shared, we obtained a contiguous sequence covering the SuPm55 interval. This led to the identification of six annotated genes as candidates for SuPm55 . Notably, SuPm55 ( CNL1 ) is rare in D. villosum and is absent from both 5VS of 91C43 DH and 5V#5S, so the 5V#4S flow-sorted sequence played a key role in isolating SuPm55 . Although the 5VS short-read sequences did not cover the Pm55 interval, the scaffolds containing sequences upstream and downstream of the Pm55 alleles were still valuable for functional analysis.
Therefore, combining the reference genome with flow-sorted target-genome sequences represents a valid approach for cloning alien genes in wheat, especially from wild outcrossing species, which usually carry complex haplotypes at the genetic locus of interest.
The successful cloning of the alien genes Pm55a and Pm55b adds to a repertoire of more than 10 NLR genes conferring powdery mildew resistance in wheat. Notably, Pm55 is not orthologous to Pm2 on chromosome arm 5VS of D. villosum , in contrast to other alien genes such as Pm8 , an ortholog of wheat Pm3 30 , and Pm21 , an ortholog of wheat Pm12 31 . While the Pm2 gene has been widely utilized in wheat breeding, it is currently losing its effectiveness against prevalent Bgt isolates in the main wheat-production regions of China 32 . Genetic mapping and expression analysis of its ancestral orthologs, Pm2-5V#4 in TF5V-1 and Pm2-5V#5 in NAU1908, revealed that they do not confer resistance. In contrast to the highly conserved Pm2 homologs in wheat and related genomes, the Pm55 homologs exhibit considerable sequence diversity and rapid evolution. The variation in amino acid composition between the PM55a and PM55b proteins is located mainly in the LRR domain, leading to distinct resistance spectra against Bgt isolates. Both Pm55a and Pm55b confer broad-spectrum resistance to powdery mildew, based on infection assays with 21 Bgt isolates, highlighting their potential as resistance genes in wheat breeding.
Previous studies demonstrated that CC-domain heterodimerization of NLR proteins can abrogate disease resistance, as homodimerization of NLR receptors via their CC domains plays a role in R-mediated immunity 33 . For example, PigmS competitively attenuates homodimerization of its paralog PigmR by forming CC-domain heterodimers, thereby suppressing blast resistance in rice 4 . In the case of suppression among the homologous Pm3 alleles, however, the N-terminal part of the LRR domain was identified as the major determinant of suppression 20 . In addition, SuSr-D1 , a gene encoding a subunit of the Mediator complex, has been shown to suppress stem rust resistance in wheat by regulating the expression of resistance genes 19 . Here, although Pm55 and SuPm55 encode unrelated NLR proteins, the SuPm55 CC domain interacts with and suppresses the hypersensitive-response cell death induced by Pm55a , but does not inhibit Pm55b -triggered cell death in tobacco leaves. This suggests that inhibition of triggered cell death may play a crucial role in the suppression of Pm55a -mediated resistance. The interactions of the Pm55 alleles with SuPm55 might therefore follow a different mechanism from that observed among homologous NLR immune receptors.
Plant NLR genes function as singletons, in pairs, or in networks to mediate resistance against pathogens 3 . Paired NLRs have been shown to regulate NLR-mediated resistance by balancing defense and fitness 11 . For example, the rice NLR pair RGA4 / RGA5 confers resistance to Magnaporthe oryzae 34 , and PigmS / PigmR confer rice blast resistance 4 . In these cases, one NLR triggers cell death and is constitutively expressed, while the other represses its partner to prevent autoimmunity. Our results demonstrate that Pm55a is induced by Bgt infection and functions as a singleton in transgenic plants, whereas expression of the antagonistic gene SuPm55 is developmentally regulated by an as-yet-unclear mechanism. In this case, introduction of Pm55a alone does not lead to autoimmunity in wheat, suggesting that the interaction between Pm55a and SuPm55 differs from that of paired NLRs. Interestingly, inactivation of SuPm55 significantly reduces grain yield, indicating that expression of SuPm55 in both seedlings and adult leaf sheaths contributes to plant fitness. These results highlight that certain singleton NLRs should be considered not as individual entities but as coevolving components together with their inhibitors when utilized in breeding programs aimed at developing high-yield, disease-resistant crops.
To breed crops with effective and durable resistance, it is highly desirable to combine different forms of resistance 10 , 35 , 36 . However, resistance alleles cannot be combined through classical crossbreeding; this limitation can only be overcome through genetic engineering or the use of F 1 hybrids 37 . In our study, we combined the distinct powdery mildew resistances conferred by the Pm55 alleles via the T5AL·5V#4S translocation carrying SuPm55 / Pm55a and the T5DL·5V#5S translocation carrying Pm55b . The line NAU2021, carrying both translocations and developed by classical crossbreeding, showed no mutual allele suppression or yield penalty in the field. Moreover, NAU2021 exhibited additional effective resistance from pyramiding the two translocated chromosomes in wheat. The multi-translocation line NAU2021 therefore represents a valuable resource for wheat improvement and provides a practical basis for pyramiding allelic series via crossbreeding to achieve durable resistance.
Combining SuPm55 / Pm55a and Pm55b in wheat does not result in allele suppression or yield penalty. Our results provide not only insights into the suppression of resistance in wheat, but also a strategy for breeding durable resistance.
Powdery mildew threatens worldwide wheat production. Here, the authors report the cloning of two powdery mildew resistance Pm55 alleles and show that they exhibit distinct interactions with the inhibitor SuPm55 , resulting in different resistance.
Subject terms | Supplementary information
Source data
| Supplementary information
The online version contains supplementary material available at 10.1038/s41467-024-44796-0.
Acknowledgements
We thank Prof. Robert McIntosh, University of Sydney, for reviewing the manuscript. We also thank Zdeňka Dubská, Romana Šperková and Jitka Weiserová for the preparation of chromosome samples for flow cytometry. This work was supported by the National Natural Science Foundation of China (32272062, 31971938); the Special Fund for Independent Innovation of Agricultural Science and Technology in Jiangsu (No. CX (19)1001); and the “JBGS” Project of Seed Industry Revitalization in Jiangsu Province (JBGS (2021) 013). IM was supported by a Marie Curie Fellowship grant award ‘AEGILWHEAT’ (H2020-MSCA-IF-2016-746253) and by the Hungarian National Research, Development and Innovation Office (K135057). KH, MS and JD were supported by the ERDF project “Plants as a Tool for Sustainable Global Development” (No. CZ.02.1.01/0.0/0.0/16_019/0000827). Computational resources were supplied by the project “e-Infrastruktura CZ” (e-INFRA LM2018140), provided within the program Projects of Large Research, Development and Innovations Infrastructures.
Author contributions
R.Z. designed the study. C.L., J.D., H.C., S.G., Y.J., X.M., T.Z., B.F., I.M., K.H., M.S., L.X., L.K., J.D., G.L. and J.W. performed the research. R.Z., C.L. and J.D. analyzed the data. R.Z., P.C. and A.C. wrote the paper.
Peer review
Peer review information
Nature Communications thanks Beat Keller, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Data availability
Data supporting the findings of this work are available within the paper and Supplementary Information files. The plant materials and datasets generated and analyzed during the present study are available from the corresponding authors upon request. Detailed genomic sequences of Pm55 ( OQ928403 ), Pm5V ( ON109832 ), SuPm5V ( OQ928410 ), Pm2-5V#4 ( OQ928409 ), Pm2-5V#5 ( OM646566 ), Pm55_h1 ( OQ928404 ), Pm55_h2 ( OQ928405 ), Pm55_h3 ( OQ928406 ), Pm55_h4 ( OQ928407 ), and Pm55_h5 ( OQ928408 ) were deposited in NCBI GenBank. The following public databases were used in this study: D. villosum 91C43 DH genome [ https://bigd.big.ac.cn/ ], IWGSC RefSeq v2.1 [ https://wheat-urgi.versailles.inra.fr/Seq ], and Triticeae genomes [ http://wheatomics.sdau.edu.cn/ ]. All primers used in this study are listed in Supplementary Table 10 . Source data are provided with this paper.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:57 | Nat Commun. 2024 Jan 13; 15:503 | oa_package/a3/7b/PMC10787760.tar.gz |
|
PMC10787761 | 38218959 | Introduction
Accumulation of evidence regarding molecular interactions in biological processes has paved the way for the construction of various biological networks, including signaling, Protein-Protein Interaction (PPI), metabolic, and gene regulatory networks, among others. These networks have found various applications, ranging from visualizing omics data 1 , 2 to enriching gene sets using topology 3 , identifying functional modules 4 , conducting causal analyses 5 , 6 , and developing computational models to understand the effects of network perturbations on cellular states 7 . Moreover, recent efforts have been directed towards associating changes in biological networks with diseases, leading to the emergence of “disease maps“ 8 – 11 . Undoubtedly, the comprehensiveness and accuracy of biological networks form the fundamental keys for their successful application in network-based research.
A number of popular knowledge bases, such as KEGG 12 , 13 , Reactome 14 , and BioCyc 15 , hold valuable information on molecular interactions in biological processes. To represent the complex relationships between biological molecules, several languages have been developed, such as KGML, BioPAX 16 , GPML 17 , and SBML 18 . However, converting this information into a comprehensive topological network has been a challenging endeavor, especially when dealing with different types of networks, such as signaling and metabolic networks. These networks often utilize distinct definitions for nodes and edges, leading to confusion and potential misinterpretations.
For instance, in signaling networks, an edge starting from node A and ending in node B, i.e., “node A activates node B”, typically implies that A is an enzyme, while B is the substrate and product of a post-translational modification (PTM) reaction, so that the substrate and product retain the same name. In contrast, metabolic networks involve substantial changes to substrates, leading to the generation of products with new names. Therefore, in a metabolic network, an edge starting from A and ending in B, i.e., “node A generates node B”, means that A is the substrate and B is the product of the reaction, which differs significantly from the definitions used in signaling networks. Without unifying the definitions of nodes and edges, direct integration of signaling and metabolic networks may introduce confusion and misinterpretation.
Various tools have been developed to read and parse these languages, with the ability to convert the information into the Simple Interaction Format (SIF) 1 , 2 . SIF is a semi-structured format, in which each line specifies a source node, a character string describing the type of the edge(s), and one or more target nodes. However, the conversion often works better for signaling networks than for metabolic networks, because metabolic reactions with multiple substrates introduce ambiguity about which molecules participate in the same reaction. Consequently, information about which molecules participate in which reaction can be lost during the conversion process.
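As a sketch of the format described above, the following Python snippet parses SIF lines into individual edges; the node names and relation types below are illustrative, not drawn from any of the databases discussed here. Note how a line with several targets expands into several edges:

```python
# Minimal sketch of parsing SIF lines into (source, relation, target) triples.
def parse_sif(lines):
    """Each SIF line is: source, relation, then one or more targets."""
    edges = []
    for line in lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # a node with no edges, or a malformed line
        source, relation, targets = parts[0], parts[1], parts[2:]
        for target in targets:  # SIF allows one or more targets per line
            edges.append((source, relation, target))
    return edges

sif = [
    "EGFR activates MAPK1",
    "HK1 produces G6P F6P",  # one source, two targets -> two edges
]
print(parse_sif(sif))
```

Multiple targets on one line are the reason reaction membership can be lost: once expanded, nothing records that the resulting edges came from the same line.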
To address these challenges, knowledge bases such as KEGG, Reactome, and Wikipathways 19 often visualize networks with edges pointing to edges. Although this visualization is user-friendly, it is not suitable for common network analysis algorithms and tools. With mounting evidence suggesting the importance of crosstalk between signaling and metabolic networks, there is an urgent need to integrate these networks into a global integrative network, termed “GIN”. Efforts have been made, but they mainly focus on visualization 20 or leverage information from PPI networks 21 , leaving the signaling and metabolic networks topologically disconnected.
In this context, we propose a visualization layout called “meta-pathway” to fundamentally unify the topological structure of signaling and metabolic networks. To convert conventional pathways into meta-pathways, we introduce an intermediate node for each reaction in the pathways to represent a conceptual “intermediate” state of the molecules in a biochemical reaction. In most biochemical reactions with multiple substrates or at least one enzyme, the substrate(s) and the enzyme need to come close enough to each other for the reaction to proceed, which forms the intermediate state. This intermediate state of the molecules is temporary and is quickly converted into products. Therefore, the intermediate nodes, which derive from this intermediate state, capture the real-world relationships between molecules and enable both signaling and metabolic reactions to be treated as chemical reactions, facilitating storage in a SIF-like format. By converting the pathways into meta-pathways and merging them, we have successfully built GINs for 7077 species based on KEGG 22 .
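A minimal sketch of the meta-pathway conversion described above: substrates and enzymes point into a per-reaction intermediate node, and the intermediate points to the products. The relation labels and the `I_` naming of intermediate nodes are assumptions for illustration only:

```python
def to_sifi(substrates, enzymes, products, reaction_id):
    """Convert one reaction into SIFI-style triples via an intermediate node.
    Substrates and enzymes point INTO the intermediate; the intermediate
    points to each product. Relation names here are illustrative."""
    intermediate = f"I_{reaction_id}"
    edges = [(s, "substrate_of", intermediate) for s in substrates]
    edges += [(e, "catalyzes", intermediate) for e in enzymes]
    edges += [(intermediate, "produces", p) for p in products]
    return edges

# A signaling (PTM) reaction: enzyme E modifies B, so B keeps its name
ptm = to_sifi(substrates=["B"], enzymes=["E"], products=["B"], reaction_id="R1")
# A metabolic reaction: two substrates yield a product with a new name
met = to_sifi(substrates=["A1", "A2"], enzymes=["E2"], products=["C"], reaction_id="R2")
print(ptm)
print(met)
```

Because every edge touches the intermediate node, both reaction types share one topology and reaction membership is preserved in a SIF-like three-column format.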
In addition to KEGG, multiple biological knowledge bases offer valuable molecular interaction data across various aspects. In this study, we have converted molecular interaction data from ten different knowledge bases into the SIF format with intermediate nodes (referred to as SIFI). Subsequently, we conducted a thorough analysis of the consensus among these interactions before integrating the GINs into a single, comprehensive network, namely GIN for human version 2.0 (GINv2.0). Our results demonstrate that this version of GIN is currently one of the most comprehensive human databases of molecular interactions, allowing for straightforward visualization and interpretation of the crosstalk between signaling and metabolic networks, exemplified through a detailed examination of the glycolysis process and its regulatory proteins.
Construction of the Global Integrative Network from ten databases
The owl files of BioPAX level 3 prepared by PathwayCommons were downloaded from https://www.pathwaycommons.org/archives/PC2/v12/ . Specifically, we selected DrugBank, HumanCyc, INOH, KEGG, NetPath, PANTHER, PID, PhosphoSitePlus (PSP), and Recon X from PathwayCommons, which contain a sufficient number of biochemical reactions extracted from the owl files. The BioPAX level 3 owl file of Reactome was acquired through https://reactome.org/download-data . The owl files were parsed with the function “readBiopax” from the R package rBiopaxParser 57 , which generated a dataframe for each database.
We built an R package to extract the reactions from the dataframes and convert them into the SIFI format. Specifically, we first extracted the reactions from the classes TransportWithBiochemicalReaction, Transport, BiochemicalReaction, ComplexAssembly, Degradation, and Conversion. We then extracted the information on the enzymes from the classes Catalysis, Control, and Modulation, and linked the enzymes with the reactions. This information was finally organized into one temporary table.
Subsequently, we created a component matching table designed to capture the relationships between proteins and complexes. We did not use the conventional names of the complexes; instead, we adopted a distinct approach wherein the complexes were systematically deconstructed into their constituent proteins through recursive processes. The name of each complex was then given by the concatenation of the names of all its components in alphabetical order, separated by underscores (“_”).
Next, we replaced the complex IDs in the reaction table with the names generated from the component names, and converted the reactions into the SIFI format. The intermediate nodes were introduced during this conversion step. The name of each intermediate node was the concatenation of all the substrates and enzymes, separated by semicolons (“;”). Note that the result of this step still used the local ID system specific to each owl file, which cannot be shared with other owl files.
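The two naming conventions described in this and the previous paragraph can be sketched as follows; the ordering of substrates before enzymes within the semicolon-separated intermediate name is an assumption of this sketch, and the component names are illustrative:

```python
def complex_name(components):
    """Name a complex by its components in alphabetical order, joined by '_'
    (the convention described in the text)."""
    return "_".join(sorted(components))

def intermediate_name(substrates, enzymes):
    """Name an intermediate node by concatenating all substrates and enzymes,
    separated by ';'. The relative ordering is an assumption of this sketch."""
    return ";".join(list(substrates) + list(enzymes))

print(complex_name(["TSC2", "TSC1"]))                  # components sorted alphabetically
print(intermediate_name(["ATP", "Glucose"], ["HK1"]))  # substrates then enzyme
```

Deterministic names make identical complexes and identical reactions collide on the same node when the SIFI files are later merged.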
Since the owl file of each database provides mapping relations between local IDs and commonly used (external) IDs, we replaced the local IDs with the external IDs suggested by each database. However, each database has its own preference regarding ID sources, so we had to unify the sources of the IDs to ensure that the same gene/chemical received the same ID across databases. Uniprot IDs were converted to gene symbols with the R package biomaRt 58 . For metabolite IDs, we constructed a mapping table using the R package metaboliteIDmapping 59 , and used the strategy in Supplementary Fig. 1 to unify the IDs according to a preference order over the sources. A tutorial for the conversion of KEGG’s owl file to the SIFI format can be found at https://github.com/BIGchix/SIFItools .
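A sketch of the preference-based ID unification step described above. The preference order below is hypothetical (the actual strategy is given in Supplementary Fig. 1); only the set of source types matches the UC_ID sources named in the Results:

```python
# Hypothetical preference order over metabolite ID sources; the real order
# is defined in Supplementary Fig. 1 of the paper.
PREFERENCE = ["CID", "SID", "CAS", "KEGG", "HMDB", "ChEBI"]

def unify_id(id_map):
    """Given a dict {source: id} for one metabolite, pick the ID from the
    most-preferred source that is available; None if no known source."""
    for source in PREFERENCE:
        if source in id_map:
            return source, id_map[source]
    return None

# KEGG outranks HMDB in this hypothetical order, so the KEGG ID is chosen
print(unify_id({"HMDB": "HMDB0000122", "KEGG": "C00031"}))
```

Applying one such rule to every metabolite guarantees that the same chemical receives the same ID regardless of which database it came from.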
Finally, we concatenated all the SIFI files into a single file and removed the redundant edges. Edges containing gene IDs of other species were also removed.
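The final deduplication step can be sketched as an order-preserving removal of repeated (source, relation, target) triples; the edges below are illustrative:

```python
def deduplicate(edges):
    """Remove redundant edges while preserving first-seen order."""
    seen = set()
    unique = []
    for edge in edges:
        if edge not in seen:
            seen.add(edge)
            unique.append(edge)
    return unique

edges = [
    ("A", "substrate_of", "I1"),
    ("A", "substrate_of", "I1"),  # redundant copy from a second database
    ("I1", "produces", "B"),
]
print(deduplicate(edges))  # the repeated edge is kept only once
```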
Network analysis of human GINv2.0
The intersection results for genes, metabolites and edges were visualized with the R package UpSetR 60 . The complete network and the glycolysis network were visualized in Cytoscape 1 , 2 . Community detection was performed with the Python package leidenalg 34 , which works efficiently with large directed graphs using the Leiden algorithm 34 .
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article. | Results
Conversion of BioPAX to SIFI
In our efforts to tackle the challenges of different knowledge base languages, we developed an R package named “SIFItools” to efficiently convert BioPAX level 3 owl files from various databases into the SIFI format. OWL (Web Ontology Language) is a powerful and expressive ontology language that allows users to define rich and complex relationships between entities. In the context of the Biological Pathway Exchange (BioPAX) language, OWL is used to represent biological pathways and their components, such as molecules, interactions, and cellular processes, in a semantically meaningful way. With SIFItools, we first extracted biochemical reactions from the owl files of nine databases prepared by PathwayCommons 23 , including HumanCyc, DrugBank 24 , INOH 25 , KEGG, NetPath 26 , PANTHER 27 , PhosphoSitePlus (PSP) 28 , PID 29 , and Recon X 30 , as well as the owl file of Reactome from its official webpage (not from PathwayCommons). This facilitated the analysis of molecular interactions across multiple databases and laid the groundwork for building a comprehensive network for human cells (Fig. 1a, b ). Each reaction was then converted into the meta-pathway structure, introducing an intermediate node (Fig. 1c ). After standardizing the 71 ID formats into seven, we analyzed the overlapping genes, chemicals and edges, then integrated all ten databases into one global integrative network, which we refer to as GINv2.0 (Fig. 1d ).
Notably, although SIFItools automated much of the curation process, manual curation was still necessary due to the diverse naming conventions and special characters used in different databases. In the process of manual curation, the most complicated task involved the conversion of internal IDs from each database’s owl file to corresponding external gene or chemical IDs. This complexity arose from the fact that a single gene or chemical could have different internal IDs across various databases, each linked to one or more distinct external IDs. To overcome this challenge, we developed a two-step approach. First, we constructed an ID mapping table using internal “XRef” links, enabling us to convert the internal IDs to external IDs from 71 different sources. Subsequently, we aggregated the external IDs from diverse sources into gene symbols and unified chemical ID types (UC_IDs), which include CID 31 , SID 31 , CAS registry numbers, KEGG, HMDB 32 , and ChEBI 33 (Supplementary Fig. 1 ). This method ensured consistency and standardization across the databases, facilitating seamless integration of the data in our subsequent analyses.
Consensus analysis of the databases
Conversion of BioPAX level 3 into the SIFI format generated networks that varied in the numbers of nodes and edges, ranging from 873 nodes (NetPath) to 5614 nodes (Reactome) (Fig. 2a ) and from 2444 edges (NetPath) to 29898 edges (Reactome) (Supplementary Fig. 2 ). Notably, the ratio between the number of genes and the number of chemicals varied across the databases. These variations accurately mirror the distinct scopes of molecular interactions inherent to each database. For example, the SIFI versions of NetPath and PSP exclusively contained human genes, while Recon X exclusively included chemical IDs (Fig. 2a ). This distinction highlights the significance of our integrative approach in capturing a comprehensive picture of human molecular interactions.
Next, we conducted an analysis of the overlapping gene symbols (Fig. 2b ), UC_IDs (Fig. 2c ), and edges (Fig. 2d ) among the ten databases. For clarity, Recon X was excluded from Fig. 2b due to its exclusive focus on chemicals. Similarly, NetPath and PSP were excluded from Fig. 2c . Our analysis revealed that the overlap of gene symbols was notably larger than the overlap of chemical IDs. For instance, in the case of Reactome, the number of unique gene symbols accounted for only 11.2% of its total gene symbols (445 out of 3981), whereas the number of unique chemical IDs represented 76.1% of its total chemical IDs (1243 out of 1633). Furthermore, we found that the overlap of interactions between databases was limited, with over 96.8% (110202 out of 113876) of the interactions appearing in only one database. This observation underscores the distinctiveness and database-specific nature of the interactions. The limited overlap of interactions highlights the importance of our integrative approach in leveraging data from multiple sources to build a comprehensive and interconnected network.
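The per-database uniqueness figures reported above can be computed with a simple set operation; the toy node sets below are illustrative, not the real database contents:

```python
def unique_fraction(target_db, all_dbs):
    """Fraction of target_db's nodes that appear in no other database."""
    others = set().union(
        *(nodes for name, nodes in all_dbs.items() if name != target_db)
    )
    nodes = all_dbs[target_db]
    return len(nodes - others) / len(nodes)

# Toy node sets (illustrative only)
dbs = {
    "DB1": {"TP53", "EGFR", "AKT1", "MYC"},
    "DB2": {"TP53", "EGFR"},
    "DB3": {"EGFR", "KRAS"},
}
print(unique_fraction("DB1", dbs))  # AKT1 and MYC are unique to DB1
```

The same function applied to edges instead of nodes yields the interaction-level uniqueness discussed in the text.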
Integration of the ten databases
We merged the SIFI files from all ten databases to construct the raw global integrative network of human. Redundant edges were removed before importing the network into Cytoscape for visualization (Fig. 3a ). The final GINv2.0 for human comprises 39,548 nodes and 113,876 edges, encompassing 6330 genes, 3579 chemical IDs, 3957 complexes, and 25,682 intermediate nodes. To facilitate further analysis, we utilized the Python package leidenalg 34 to cluster the network into distinct sub-networks. In Fig. 3a , we presented the top 20 sub-networks with the largest number of nodes. These sub-networks exhibit diverse compositions of genes, chemicals, and intermediates. Notably, most sub-networks are a mix of genes and chemicals; however, some sub-networks, such as clusters 3, 5, 6, 8, 11, 13, 15, 17, 18, and 19, are predominantly gene-driven, while others, like clusters 4 and 16, are primarily chemical-centric (Fig. 3b ). This observation underscores the complex interplay between signaling networks and metabolic pathways, contributing to the complexity of the network.
Additionally, we calculated the topological network metrics, presented in Table 1 . Notably, the node with the highest degree was water, followed by ATP and ADP. These findings indicate that water, ATP, and ADP are central participants in biological processes within human cells, aligning well with established knowledge in the field. To gain deeper insights into specific sub-networks, we conducted a focused examination of cluster 16. We identified several nodes with high degrees, including HIF1A, KDM1A, Succinic acid, Acetyl-CoA, Formaldehyde, CO2, NADH, and NAD+ (Fig. 3c ).
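The degree ranking described above can be reproduced on any SIFI-style edge list; the toy edges below are illustrative only, chosen so that water has the highest degree, as it does in GINv2.0:

```python
from collections import Counter

def node_degrees(edges):
    """Total degree (in + out) of each node in a directed edge list."""
    deg = Counter()
    for source, _, target in edges:
        deg[source] += 1
        deg[target] += 1
    return deg

# Illustrative toy edges, not real GINv2.0 data
edges = [
    ("Water", "substrate_of", "I1"),
    ("Water", "substrate_of", "I2"),
    ("Water", "substrate_of", "I3"),
    ("I1", "produces", "ATP"),
    ("I2", "produces", "ADP"),
]
top = node_degrees(edges).most_common(1)[0]
print(top)
```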
To investigate the composition of the database sources of each cluster, we calculated the percentage of edges contributed by different databases to each cluster (Fig. 3d ). Our analysis showed that ReconX’s data (which consist only of chemicals) are mainly present in cluster 1 and cluster 10. For cluster 1, there are three major sources: ReconX, INOH, and HumanCyc. In cluster 10, the major sources are ReconX, Reactome, and HumanCyc. Similar results can be observed for PSP and NetPath. This evidence suggests that the databases focusing only on genes or only on chemicals are well mixed with the other databases. On the other hand, KEGG and Reactome contribute the majority of the edges of the chemical-centric clusters 4 and 16, respectively, while Reactome also dominates the gene-centric cluster 15. This suggests that these two comprehensive databases, KEGG and Reactome, which both cover signaling and metabolic pathways, may have distinct scopes of signaling and metabolic reactions.
Regulation of glycolysis by signaling proteins
To demonstrate the practical application of GINv2.0 in analyzing signaling and metabolic networks, we extracted nodes representing the metabolites, intermediates, and protein enzymes involved in glycolysis, along with the proteins that regulate these enzymes. Subsequently, we visualized the network in Cytoscape (Fig. 4 ). Glycolysis is a fundamental cellular metabolic process that converts glucose to pyruvate, generating ATP and NADH. The GINv2.0 visualization of glycolysis clearly illustrates how enzymes are linked to metabolites through intermediate nodes. Moreover, each intermediate node represents a specific reaction, effectively circumventing ambiguity arising from multiple isozymes catalyzing the same reaction.
Subsequently, we focused on the incoming nodes of the enzymes involved in glycolysis, which provided insights into the proteins regulating this crucial metabolic pathway. Our analysis revealed that, out of the ten steps comprising glycolysis, seven steps were regulated by various kinases, including SRC 35 , 36 , ULK1 37 , 38 , AKT1 39 , AKT2 40 , PRKAA1 41 , PRKCD 42 , PAK1 43 , MAPK1 44 , MAPK8 45 , GSK3B 46 , PIM2 47 , EGFR 43 , and CDK6 48 . These findings highlight the complicated control mechanisms governing glycolysis, ensuring its harmonious coordination with the activation and inhibition of other cellular pathways, ultimately balancing energy production.
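Extracting the regulators of a set of enzymes, as done above, amounts to collecting the source nodes of edges pointing into those enzymes. The edges below are a small illustrative subset (ULK1 and AKT1 acting on glycolytic enzymes, as discussed in the text), not the full GINv2.0 content:

```python
def incoming_nodes(edges, targets):
    """Collect source nodes of edges pointing into any of the given targets
    (e.g., kinases regulating glycolytic enzymes)."""
    return {source for source, _, target in edges if target in targets}

# Illustrative subset of regulatory edges
edges = [
    ("ULK1", "phosphorylates", "HK1"),
    ("ULK1", "phosphorylates", "PFKM"),
    ("AKT1", "phosphorylates", "PFKM"),
    ("HK1", "catalyzes", "I_hexokinase"),
]
print(sorted(incoming_nodes(edges, {"HK1", "PFKM"})))
```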
Notably, we found that ULK1 is a prominent positive regulator of HK1 37 , PFKM 37 , and ENO1 37 , 38 , making it a pivotal protein in governing glycolysis, judging by the number of enzymes it controls. While SRC is known for its broad involvement in various cellular processes, ULK1 is well known for its essential role in initiating autophagy 49 . Building on this intriguing clue, we explored the relationship between these kinases and autophagy regulation in greater depth. Remarkably, seven out of the thirteen identified proteins were found to exhibit direct or indirect regulatory effects on autophagy. Protein Kinase AMP-Activated Catalytic Subunit Alpha 1 (PRKAA1), the catalytic subunit of AMPK, plays a crucial role in autophagy initiation under glucose deprivation by directly phosphorylating ULK1 41 . Additionally, AKT suppresses tuberous sclerosis complex proteins 1/2 (TSC1/2) through phosphorylation, leading to mTORC1 activation and subsequent autophagy inhibition 50 . Reports have shown that GSK3B promotes ULK1 acetylation by mediating KAT5/TIP60 phosphorylation during starvation 51 . Furthermore, MAPK8 activates autophagy by mediating BCL2 phosphorylation, facilitating the dissociation of BCL2 from BECN1 52 . Finally, emerging evidence suggests that PIM2 is capable of phosphorylating HK2, thereby promoting autophagy under glucose deprivation 53 .
Collectively, these findings indicate a synergistic regulation of glycolysis and autophagy, particularly under glucose-starved conditions, enriching the understanding of cellular adaptation to varying nutrient availability. In summary, our comprehensive network analysis empowers researchers with fresh perspectives on the cross-talk between metabolic and cellular regulatory networks, paving the way for deeper investigations into the underlying molecular complexities. | Discussion
In this work, we compiled a much more comprehensive GIN for human than the previous version. The previous GIN 22 for human was built only upon KEGG and includes 5145 genes and 1501 metabolites. In the present work, we compiled a new GIN for human from ten different databases, involving 6330 genes and 3579 metabolites, increases of 23.0% and 138.4%, respectively. The new GIN for human is much more useful than the previous one, as the integration of various databases greatly enhances the comprehensiveness of the network. This is exemplified by the demonstration of the orchestrated regulation of autophagy and glucose metabolism under stress, which leveraged information from multiple databases.
We also offer a new tool for the conversion of BioPAX level 3 files into the SIFI format. In our previous work, we built the GIN with a pipeline of Perl scripts written specifically to parse KGML files. Since the use of the KGML format is currently limited to KEGG, our previous pipeline lacked the ability to process the files of other databases. In the present work, we constructed an R package (SIFItools) that can convert BioPAX level 3 files into the SIFI format with minimal manual curation required. Because many biological databases share their data in BioPAX level 3 format, our new package, SIFItools, is more convenient and has many more potential applications when building GINs from databases.
In this work, we also compared the overlapping information between the ten databases. We were not able to conduct such an analysis in our previous work, since we only converted the KEGG database into a GIN. In the present work, we compared the molecular interactions of the ten databases and, surprisingly, found that overlaps between the databases were rare. Although this could be partly due to the different focuses of the databases, there is still a great proportion of unmatched IDs, especially for metabolites. This could lead to confusing results when applying over-representation analysis (ORA) of pathways, as pointed out by another work 54 .
The knowledge bases of pathways serve as repositories for capturing molecular interactions in both physiological and pathological contexts. While each database emphasizes distinct molecular interactions, synthesizing the collective insights from various sources can outline the comprehensive scope of these knowledge bases. However, the exploration of consensus among diverse knowledge bases has been limited, in part due to the varying data formats used by each database. Despite PathwayCommons’ efforts to standardize data formats, the inherent features of XML format have posed challenges for direct cross-database comparisons. For example, in BioPAX level3, key information about a given reaction may be dispersed across properties such as “left”, “right”, “product”, “controlled”, “controller”, or “cofactor.” This distribution necessitates the extraction of reaction details from multiple attributes to facilitate comparison, thereby complicating and impeding the efficiency of the process. The introduction of meta-pathways and SIFI format has alleviated this predicament by structuring reaction information into a SIF-like three-column configuration. This transformation enables rapid comparisons between reactions, streamlining the comparative analysis.
We noticed that there are overlaps between the concepts of meta-pathway, SIFI format and GIN. To clarify the definitions of the three concepts: (1) Meta-pathway is the way of displaying pathways using intermediates to connect the substrates and products. (2) The GIN (Global Integrative Network) is a network combining the molecular interactions from all pathways. (3) SIFI (Simple Interaction Format with Intermediates) is a format we use to store the molecular interactions of meta-pathways and GIN. The differences between the three concepts are: meta-pathway is the component of GIN, while they can be both stored in SIFI format.
The consensus analysis of GINs generated from different databases highlighted significant diversity across the databases, particularly concerning the edges and nodes related to metabolites. This observed diversity could potentially arise from variations in the specific focus of each database or from disparities in naming conventions. Such variations raise valid concerns regarding the reliability of metabolite enrichment analysis, aligning with findings from a recent investigation into the ORA of pathways leveraging metabolomics data. Notably, the authors of this study revealed significant disparities in ORA results when employing distinct databases, such as KEGG, Reactome and BioCyc 54 , which may be partially due to the inconsistency we found in our consensus analysis.
The credibility of the edges is also important for network analysis, since questionable edges will create misleading paths in path-related network analyses, as evidenced in our previous work 22 . In a comparative analysis of different databases, repeated edges may be more credible, since they have been independently validated by different databases. In fact, one of our original goals in comparing the edges from different databases was to score the edges based on the number of repeats. However, upon analyzing the databases, we found that a large number of the non-redundant edges result from variations in the scopes of the databases. For example, in Fig. 4 , the edges extracted from the PSP database are not found in any other database, yet all of these edges have credible publication sources. This means that a large proportion of non-redundant edges may be credible. Based on this consideration, we excluded the analysis of edge credibility from our current work.
By analyzing GINv2.0, we found that the number of intermediate nodes was substantially larger than the combined count of genes and metabolites. Since each intermediate node represents a distinct biochemical reaction, the number of genes/metabolites involved in a pathway, which is often used in conventional enrichment analyses such as GO, may not truly reflect the number of reactions associated with the pathway. For instance, consider a scenario where five genes are shared between the input gene set and a pathway gene set. While ORA and GSEA 55 , 56 do not distinguish whether these five genes participate in one single reaction or in five distinct ones, the likelihood of a significant association between the input and pathway gene sets intuitively differs between the two cases. Thus, the intermediate nodes likely constitute a hidden layer residing between the genes/metabolites and the pathways, one that has not yet been exploited for enrichment analysis. The construction of GINs is therefore a starting point for building the relations between genes/metabolites, intermediate states, and pathways, and for further improving gene set/pathway analysis.
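The distinction drawn here, five genes in one reaction versus five genes in five reactions, can be made concrete by counting the distinct intermediate nodes a gene set touches. The `I_` prefix for intermediate nodes and the gene names are assumptions of this sketch:

```python
def distinct_reactions(edges, gene_set):
    """Collect distinct intermediate (reaction) nodes touched by a gene set,
    as opposed to simply counting overlapping genes. Intermediate nodes are
    assumed to carry an 'I_' prefix in this sketch."""
    return {t for s, _, t in edges if s in gene_set and t.startswith("I_")}

# Toy pathways: five genes feeding one reaction vs. five genes in five reactions
one_reaction = [(g, "substrate_of", "I_1") for g in ["G1", "G2", "G3", "G4", "G5"]]
five_reactions = [(f"G{i}", "substrate_of", f"I_{i}") for i in range(1, 6)]
genes = {"G1", "G2", "G3", "G4", "G5"}
print(len(distinct_reactions(one_reaction, genes)),
      len(distinct_reactions(five_reactions, genes)))
```

The gene overlap is five in both cases, yet the reaction counts differ, which is exactly the information conventional ORA and GSEA do not see.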
The illustration of the glycolysis process and its regulatory proteins underscores the benefits of integrating multiple knowledge bases. Notably, we found that the core nodes and edges of the glycolysis process were primarily derived from KEGG, Reactome, HumanCyc, and INOH, while the regulatory interplay between kinases and glycolytic enzymes came from PSP. A GIN built from any single database could not provide such a comprehensive view of molecular interactions. This demonstrates the necessity of database integration to forge a comprehensive and unified network.
In the current version of GIN (v2.0), the intermediate nodes are built for metabolic reactions and PTM reactions, but not for PPIs. The reason for excluding PPIs is that the GIN we built is a directed graph, whereas PPI networks are undirected; therefore, current PPI data do not fit the GIN we built. However, we are working on a solution to generate appropriate intermediate nodes for complexes with multiple protein participants in PPIs. With the flexibility of the meta-pathway structure, other types of data regarding molecular interactions in cells, including the relations between transcription factors (TFs) and their targets and between miRNAs and their targets, will soon be incorporated into GINs as well. | Knowledge bases have been instrumental in advancing biological research, facilitating pathway analysis and data visualization, which are now widely employed in the scientific community. Despite the establishment of several prominent knowledge bases focusing on signaling, metabolic networks, or both, integrating these networks into a unified topological network has proven to be challenging. The intricacy of molecular interactions and the diverse formats employed to store and display them contribute to the complexity of this task. In a prior study, we addressed this challenge by introducing a “meta-pathway” structure that integrated the advantages of the Simple Interaction Format (SIF) while accommodating reaction information. Nevertheless, the earlier Global Integrative Network (GIN) was limited to reliance on KEGG alone. Here, we present GIN version 2.0, which incorporates human molecular interaction data from ten distinct knowledge bases, including KEGG, Reactome, and HumanCyc, among others. We standardized the data structure, gene IDs, and chemical IDs, and conducted a comprehensive analysis of the consistency among the ten knowledge bases before combining all unified interactions into GINv2.0.
Utilizing GINv2.0, we investigated the glycolysis process and its regulatory proteins, revealing coordinated regulation of glycolysis and autophagy, particularly under glucose starvation. The expanded scope and enhanced capabilities of GINv2.0 provide a valuable resource for comprehensive systems-level analyses in biological research. GINv2.0 can be accessed at: https://github.com/BIGchix/GINv2.0 .
Supplementary information
The online version contains supplementary material available at 10.1038/s41540-024-00330-y.
Acknowledgements
We thank Yan Yan for her insightful discussion on SIFItools, and Siyi Su for his help on the glycolysis illustration.
Author contributions
Xu.C. and Xiao.C. conceived the study; S.Y. developed SIFItools; Xiao.C., Y.Z., L.L., Z.G. and X.L. collected data and performed conversion; Xu.C., S.Y. and Y.Z. analyzed the data; Y.Z. performed network visualization; Xu.C. wrote the manuscript; All authors helped in the writing of the manuscript. All authors approved the final version of the manuscript.
Data availability
We used the BioPAX level 3 owl files prepared by PathwayCommons [ 23 ], which can be accessed at https://www.pathwaycommons.org/archives/PC2/v12/ . The BioPAX level 3 owl file of Reactome was acquired through https://reactome.org/download-data . The GINv2.0 generated in this work can be freely accessed from GitHub: https://github.com/BIGchix/GINv2.0 .
Code availability
The R package “SIFItools” can be freely accessed and installed from github: https://github.com/BIGchix/SIFItools .
Competing interests
The authors declare no competing interests.

License: CC BY. Citation: NPJ Syst Biol Appl. 2024 Jan 13; 10:4
PMC10787762 (PMID: 38218942)

Introduction
The ‘use of opportunistic modes of nutrient acquisition’ was recently described as a hallmark of cancer cells [ 1 ], which live in a nutrient-poor microenvironment. Cancer cells must adapt their metabolism to support biomass production, generate ATP, and maintain redox homeostasis. Disrupting these processes can interfere with both tumor growth and proliferation [ 2 ]. Amino acids, like other biomacromolecules, play an important role in rapidly proliferating cancer cells, serving as carbon and nitrogen donors that help overcome nutrient limitation [ 3 ]. Thus, amino acid metabolism has been extensively studied in tumors, second only to glucose metabolism.
While the division into essential amino acids (EAAs) and nonessential amino acids (NEAAs) is appropriate for normal cells, the classification does not apply to cancer cells [ 4 ]. Altered amino acid metabolism is common in tumors, and nonessential amino acids often become essential in tumors. Thus, targeting certain amino acids has the potential to control specific tumors. Targeting asparagine metabolism with enzymes such as asparaginase has shown potential in treating leukemia and is currently in clinical use. Furthermore, targeting molecules in amino acid metabolic signaling pathways also has the potential to treat tumors [ 5 ]. For example, targeting the mammalian target of rapamycin (mTOR) can control the growth of various tumors, including breast cancer, kidney cancer, and neuroendocrine cancer. Other key molecules in amino acid metabolic signaling pathways, including MYC and KRAS, are also burgeoning targets for tumor biotherapy.
In addition to altered nutrients and signaling pathways, solid tumors are known to recruit immune cells into the stroma and create favorable conditions for their own growth and survival [ 6 ], forming what is known as the tumor microenvironment (TME). Cells in the TME can not only resist immune surveillance and drug therapy, but also provide amino acids to tumors to meet their growth needs. Thus, restricting amino acids in the TME is an effective way to limit tumor growth [ 7 ]. Amino acids also play an important role in epigenetics, including DNA methylation and histone modification [ 8 ]. Improving our understanding of their roles in tumor progression and immune evasion could provide novel ideas for metabolic cancer therapy.

Amino acid metabolism plays important roles in tumor biology and tumor therapy. Accumulating evidence has shown that amino acids contribute to tumorigenesis and tumor immunity by acting as nutrients and signaling molecules, and by regulating gene transcription and epigenetic modification. Therefore, targeting amino acid metabolism provides new ideas for tumor treatment and may become an important therapeutic approach after surgery, radiotherapy, and chemotherapy. In this review, we systematically summarize recent progress on amino acid metabolism in malignancy, its interaction with signaling pathways, and its effects on the tumor microenvironment and epigenetic modification. We also highlight potential therapeutic applications and future expectations.
Facts
Altered amino acid metabolism in tumors challenges the traditional classification of essential and nonessential amino acids. Amino acids have emerged as pivotal regulators in tumors, participating in a myriad of bidirectional interactions involving signaling pathways, the tumor microenvironment, and epigenetic modifications. Clinical trials align with the idea that limiting amino acid intake may improve cancer prognoses.
Open Questions
Among the several effects that are simultaneously regulated by a given amino acid, is there a chief effect that determines the progression or repression of a tumor? What are the optimal strategies and urgent challenges for the clinical translation of amino acid-based therapies in the near future? Is the altered amino acid metabolism described in different tumors causally linked to their etiology and pathogenesis?
Reprogrammed amino acid metabolism in cancer
A number of cancers have been found to be auxotrophic for NEAAs [ 9 ]. This may be because the demand from proliferation is too large and exceeds the supply, because the related enzymes are mutated, or because metabolic pathways are dysregulated. These amino acids are termed conditional EAAs. Considering the swift proliferation of tumors within a nutrient-deprived environment, the composition of amino acids frequently displays instability. This fluctuation in amino acids can significantly impact overall cellular metabolism, ultimately culminating in cell proliferation or death [ 10 ]. Therefore, the 20 standard proteinogenic amino acids, including conditional EAAs (glutamine, arginine), EAAs (branched-chain amino acids, tryptophan), and nonessential amino acids (asparagine, aspartate), play flexible roles in protein synthesis and energy supply in tumors.
Glutamine metabolism
Glutamine (Gln) is a conditional EAA: it is not essential for normal cells but becomes crucial for tumor cells due to their heightened demand. It is the most abundant amino acid in plasma and the most rapidly consumed amino acid in tumor cells [ 11 ]. As an EAA in tumors, glutamine participates in rapid biosynthetic reactions. Tumor cells utilize glutamine avidly, a phenomenon known as glutamine addiction [ 12 ]. Thus, it often functions as the rate-limiting molecule of the cell reproductive cycle. Once deprived of glutamine, cancer cells usually arrest in S phase [ 13 ]. Additionally, glutamine plays a crucial role in maintaining redox homeostasis, replenishing the tricarboxylic acid (TCA) cycle, and participating in signal transduction within tumors [ 14 ].
ASCT2 (SLC1A5) is the main glutamine transporter in tumors (Fig. 1 ). It is regulated by multiple tumor-associated transcription factors, including Rb/E2F [ 15 ], androgen receptor 3 [ 16 ], and ATF4 [ 17 ]. ASCT2 is highly expressed in tumor tissue, and its expression level is negatively correlated with patient prognosis. As ASCT2 transports glutamine for tumor consumption, inhibiting ASCT2 induces apoptosis and exhibits anti-cancer activity in acute myeloid leukemia [ 18 ], gastric cancer [ 19 ], prostate cancer [ 20 ], and triple-negative breast cancer [ 21 ]. In addition, tumor cells are capable of synthesizing glutamine themselves from glutamate (Glu) and ammonia. Glutamine synthetase (GS) is highly expressed in cancer cells to support their rapid proliferation. Moreover, GS can also promote cell proliferation independently of its catalytic function, solely by interacting with nuclear pore proteins [ 22 ]. Therefore, tumor cells acquire a substantial amount of glutamine through both intrinsic synthesis and extrinsic uptake, emphasizing its critical role in tumor metabolic reprogramming.
Glutaminolysis is a process catalyzed by glutaminase 1 (GLS1) or GLS2 to produce glutamate [ 23 ]. GLS1 and GLS2 are isozymes that play opposite roles in tumor development [ 24 ]. GLS1 has oncogenic properties, while GLS2 has been described as a tumor suppressor. Numerous studies have reported higher expression of GLS1 and lower expression of GLS2 in various tumor types, including liver cancer and colorectal cancer [ 25 , 26 ]. GLS1 is regulated by the oncogenes MYC [ 27 ], Rho GTPases [ 28 ], and Notch [ 29 ]. In colorectal cancer cells, GLS1 is essential for tumor growth, invasion, and metastatic colonization. Mechanistically, in the hypoxic TME, HIF-1 activates the expression of GLS1 to promote tumor migration, invasion, and metastatic colonization [ 26 ]. Besides, GLS1 plays a crucial role in boosting the production of GSH and NADH, contributing to the maintenance of oxidative balance and thereby promoting tumor proliferation [ 30 ]. Conversely, GLS2 expression is transcriptionally upregulated by tumor suppressor and stress-related proteins, including p53, p63, and p73. Although GLS2 plays a role in the production of GSH, it is worth noting that GLS2 modestly regulates the GSH/GSSG ratio, which is essential to oxidative balance. Unlike GLS1, GLS2 catalyzes glutamate metabolism to promote α-ketoglutarate (α-KG) production, feeding the TCA cycle and thereby facilitating the production of lipid ROS. The accumulation of ROS leads to mitochondrial membrane hyperpolarization and thereby induces ferroptosis [ 25 ]. In addition to glutaminolysis, glutamine can be metabolized into intermediates such as carbamoyl phosphate (CP) and phosphoribosylamine (PRA) for the synthesis of purines and pyrimidines, which are essential components for DNA synthesis and repair during rapid tumor proliferation [ 31 , 32 ].
Glutamate, the product of glutaminolysis, provides important resources for energy and biomacromolecule synthesis in tumors (Fig. 1 ). In tumors under glucose-limited conditions, glutamate acts as a substitute for glucose, producing the intermediate α-KG to fuel the TCA cycle. However, the provision of α-KG alone is inadequate to sustain the TCA cycle. This insufficiency arises from the limited availability of acetyl-CoA, a rate-limiting molecule in the TCA cycle. Studies have found that in glutamine-addicted tumors, mitochondrial phosphoenolpyruvate carboxykinase (PCK2) expression is elevated, facilitating phosphoenolpyruvate (PEP) production from oxaloacetate generated in the TCA cycle [ 33 ]. Thus, glutamine-derived PEP acts as a substitute for glucose-derived PEP, offering a valuable source of acetyl-CoA that replenishes the TCA cycle [ 5 ]. Glutamate and α-KG also participate in transamination to other nonessential amino acids. In addition, α-KG serves as both substrate and cofactor for DNA dioxygenase enzymes involved in DNA demethylation [ 34 ] (detailed later in amino acid metabolism and epigenetic modification).
Arginine metabolism
Arginine (Arg) is also identified as a conditional EAA in tumors [ 5 ]. It can be synthesized de novo from aspartate and citrulline in the urea cycle, under the catalysis of argininosuccinate synthase 1 (ASS1) and argininosuccinate lyase (ASL) (Fig. 1 ). However, ASS1, the rate-limiting enzyme in the urea cycle, is usually downregulated in cancer, and its downregulation has been reported to be associated with advanced tumor stage [ 35 ]. ASS1 downregulation redirects aspartate from urea production towards pyrimidine biosynthesis, meeting the high demand of rapid tumor proliferation. This phenomenon is commonly known as urea cycle dysregulation [ 36 ]. Since tumors underexpress urea cycle enzymes such as ASS1 and therefore downregulate arginine synthesis, an exogenous arginine supply is critical for tumor survival and proliferation. Arginine can be obtained through cationic amino acid transporter (CAT) family transporters, including CAT-1, CAT-2, and CAT-3. CATs are frequently upregulated in many human cancers [ 37 ]. Thus, in arginine-dependent cancer cells, CAT knockdown can decrease viability and induce apoptosis [ 38 ].
Arginine can be hydrolyzed by arginases (cytoplasmic, ARG1; mitochondrial, ARG2) and both arginases are upregulated in cancer cells to ensure the production of polyamines. ARG1 is upregulated in a wider range of tumors compared to ARG2. ARGs convert arginine to urea and ornithine. In tumors, ornithine is metabolized by upregulated ornithine decarboxylase (ODC) into polyamines including putrescine, spermidine, and spermine [ 39 ]. Polyamines are well known for their crucial role in tumor proliferation and DNA stability [ 40 – 42 ]. They facilitate cell proliferation by increasing DNA synthesis through the activation of enzymes such as DNA polymerases, helicases, and DNA ligases. Besides, cellular protein synthesis is also positively correlated with polyamines. Moreover, natural polyamines function as free radical scavengers. Thus, the strong affinity between polyamines and DNA enables the stabilization of DNA structure, granting polyamines the capacity to protect nucleic acids from damage [ 43 , 44 ].
In addition, arginine can produce NO under the catalysis of nitric oxide synthase-2 (NOS-2) in tumors and macrophages. NO affects the TME and tumor proliferation by promoting angiogenesis [ 45 ]. Besides, NO-derived peroxynitrite can nitrate tyrosine residues and block tyrosine phosphorylation, reducing T cell proliferation and activation [ 46 ].
Branched-chain amino acid (BCAA) metabolism
BCAAs, namely isoleucine (Ile), leucine (Leu), and valine (Val), are closely interconnected and classified as EAAs in both normal and tumor cells. Changes in the level of one BCAA are accompanied by changes in the other two in the same direction and of similar magnitude [ 2 ]. As EAAs, BCAAs cannot be synthesized in humans, so the corresponding transporters are critical. LAT1 (SLC7A5) and LAT2 (SLC7A8) serve as the primary transporters for BCAAs [ 47 , 48 ] (Fig. 1 ), exhibiting high expression levels in glioblastoma and clear cell renal cell carcinoma [ 49 , 50 ]. Drugs targeting LATs (BAY-8002, JPH203, OKY034, etc.) have already been used in preclinical cancer treatment [ 51 , 52 ].
BCAAs affect protein synthesis either by transmitting signals about the cell's nutritional state or by acting as proteinogenic amino acids [ 53 ]. BCAA accumulation mainly promotes mTORC1 activation to enhance tumor development and growth [ 54 ]. Specifically, mTORC1 triggers a cascade of signaling through phosphorylating its downstream effectors, including eukaryotic translation initiation factor 4E binding protein 1 (4EBP-1), p70 ribosomal S6 kinase 1 (S6K1), and sterol regulatory element binding protein (SREBP), to regulate autophagy and to synthesize lipids, nucleotides, and proteins [ 55 ] (detailed later in signal pathways in amino acid metabolism). In addition, BCAAs, especially leucine, are essential for protein synthesis, as they are in great demand for new protein translation [ 56 ].
BCAA catabolism and its related enzymes are closely linked to tumorigenesis. Leucine, isoleucine, and valine catabolism is mediated by BCAA transaminase 2 (BCAT2), producing the branched-chain α-keto acids (BCKAs) α-ketoisocaproate (α-KIC), α-keto-β-methylvalerate (α-KMV), and α-ketoisovalerate (α-KIV), respectively. Subsequently, BCKAs such as α-KIC can undergo further metabolic conversion into acetyl-CoA, while α-KIV can be metabolized into succinyl-CoA; α-KMV can be converted into both acetyl-CoA and succinyl-CoA. These metabolites actively participate in the TCA cycle. Thus, BCAA catabolism is critical for the development of cancers, especially pancreatic ductal adenocarcinoma [ 54 ]. BCAAs also play a vital role in nucleotide synthesis by sustaining the level of ribonucleotide reductase regulatory subunit M2 (RRM2) [ 57 , 58 ]. Since BCAAs are tightly related to tumors, altered BCAA levels in blood can predict the development of certain tumors in both humans and mice [ 59 ].
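The BCAT2 products and their TCA-cycle entry points described above can be captured in a small lookup table. This is a sketch in Python: the mapping follows the text, while the helper function and its name are our own illustration.

```python
# Illustrative mapping of BCAA catabolism as described in the text:
# the BCAT2 transamination step yields a branched-chain keto acid (BCKA),
# which is then converted into TCA-cycle entry metabolites.
bcat2_products = {
    "leucine": "alpha-KIC",
    "isoleucine": "alpha-KMV",
    "valine": "alpha-KIV",
}

tca_entry = {
    "alpha-KIC": {"acetyl-CoA"},
    "alpha-KMV": {"acetyl-CoA", "succinyl-CoA"},
    "alpha-KIV": {"succinyl-CoA"},
}

def catabolic_fate(bcaa):
    """Return the TCA-cycle entry metabolites for a given BCAA."""
    return tca_entry[bcat2_products[bcaa]]

print(sorted(catabolic_fate("isoleucine")))  # ['acetyl-CoA', 'succinyl-CoA']
```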
Tryptophan metabolism
Tryptophan (Trp) is also an EAA, as it cannot be synthesized in vivo. Tryptophan is involved in inherent malignant characteristics of tumors and can limit tumor immunity. The most important metabolic pathway for tryptophan is the kynurenine pathway. Free, rather than albumin-bound, tryptophan can be catalyzed by indoleamine 2,3-dioxygenase 1 (IDO1) and tryptophan 2,3-dioxygenase (TDO) to produce kynurenine [ 60 ] (Fig. 1 ). Along the kynurenine pathway, a series of biologically active molecules are produced that influence tumor progression. The primary metabolite kynurenine has been reported to block T cell proliferation and induce T cell death [ 61 ]. Advanced cancers are associated with an increased kynurenine/tryptophan ratio, indicating that the kynurenine level correlates with tumor malignancy [ 62 ]. Under the catalysis of kynurenine 3-monooxygenase (KMO) and kynureninase (KYNU), kynurenine can be further catabolized to NAD + and alanine. The kynurenine pathway is known as the de novo NAD + synthesis pathway, conferring potent resistance to oxidative stress and promoting cancer cell metastasis [ 63 ]. In vivo studies have revealed that changes in tryptophan metabolism can decrease NAD + synthesis, increasing DNA damage and thereby promoting hepatocarcinogenesis [ 64 ]. Besides, alanine is deleterious for spheroid growth and thereby suppresses cancer progression [ 65 ].
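Since the kynurenine/tryptophan ratio is used as a readout of IDO1/TDO activity, computing it is straightforward; the concentrations in the sketch below are purely illustrative values we invented, not measured data.

```python
# Kyn/Trp ratio as a readout of flux along the kynurenine pathway.
# All concentrations are hypothetical (micromolar), for illustration only.
def kyn_trp_ratio(kynurenine_um, tryptophan_um):
    """Return the kynurenine/tryptophan ratio; a higher value suggests
    more IDO1/TDO activity (associated with advanced cancers, see text)."""
    if tryptophan_um <= 0:
        raise ValueError("tryptophan concentration must be positive")
    return kynurenine_um / tryptophan_um

baseline = kyn_trp_ratio(2.0, 60.0)  # illustrative reference sample
tumor = kyn_trp_ratio(3.6, 45.0)     # illustrative tumor sample
print(round(tumor / baseline, 2))    # fold change of the ratio: 2.4
```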
Apart from the kynurenine pathway, tryptophan can also be metabolized via the 5-hydroxytryptamine (5-HT) pathway and the indole pathway, which together account for less than 5% of tryptophan metabolism [ 66 ]. 5-HT, also called serotonin, has more recently emerged as a growth factor for human tumor cells of different origins [ 67 ]. Serotonin enhances the expression of PD-L1 on mouse and human cancer cells in vitro via serotonylation, the formation of covalent bonds between glutamine residues and serotonin, resulting in tumor progression [ 68 ]. In the indole pathway, indole production activates the aryl hydrocarbon receptor (AhR) in tumor-associated macrophages (TAMs) and thus inhibits intratumoral CD8 + T cell function [ 69 ]. Tryptophan metabolism also plays a vital role in the TME; see the section on amino acids in the TME below for details.
An upregulated kynurenine pathway is correlated with tumor progression. In tumors such as non-small cell lung cancer (NSCLC) and esophageal squamous cell cancer, higher IDO1 and TDO expression in the kynurenine pathway is associated with higher TNM stage and shorter overall survival [ 70 ]. IDO1 expression can be either triggered as a counter-regulatory response to cytokines such as IL-1β and IL-6 released from tumor-infiltrating immune cells or maintained through tumor-intrinsic oncogenic signaling [ 71 , 72 ]. Studies have found that intratumoural IDO1 expression correlates with the frequency of liver metastases in colorectal cancer [ 73 ]. Besides, overexpression of IDO1 augments the motility of lung cancer cells, whereas its knockdown reduces cancer cell motility [ 74 ]. TDO, an enzyme that catalyzes the same reaction as IDO1, is also linked to a poor prognosis when overexpressed [ 66 ]. In a mouse model of lung cancer, TDO inhibition resulted in a reduction in the number of tumor nodules in the lungs [ 75 ].
Asparagine and aspartate
Asparagine (Asn) and aspartate (Asp) are interconvertible and are classified as nonessential amino acids, yet they play vital roles in tumor proliferation and metastasis. Asparagine synthetase (ASNS) catalyzes the conversion of aspartate to asparagine, while asparaginase (ASNase) catalyzes the conversion of asparagine to aspartate (Fig. 1 ). Aspartate can also be produced from the amino group of glutamate and OAA from the TCA cycle under the catalysis of glutamic-oxaloacetic transaminase (GOT). Aspartate is the limiting metabolite for proliferation in tumors under hypoxia, and its level correlates with hypoxic markers [ 76 ]. Besides, aspartate has poor cell permeability, which prevents its acquisition from the environment. Therefore, inhibiting intracellular aspartate synthesis and limiting extracellular aspartate uptake represses tumor proliferation [ 77 ]. However, asparagine can be efficiently imported into tumors. Tumors with high ASNase expression can escape this suppression through conversion of asparagine into aspartate, bypassing the intrinsic aspartate limitation and promoting tumor growth [ 78 ].
Asparagine and aspartate metabolism plays multiple roles in tumor progression. Aspartate originally participates in the urea cycle. However, in many tumors, loss of ASS1 in the urea cycle promotes cancer proliferation by diverting the aspartate substrate towards carbamoyl-phosphate synthetase 2 (CPS2), aspartate transcarbamylase (ATC), and dihydroorotase (DHO), the enzymes that catalyze the first three reactions of the pyrimidine synthesis pathway, resulting in increased tumor progression [ 36 ]. Besides, asparagine export is accompanied by reverse transport of serine, arginine, and histidine; thus, its intracellular level is critical for the uptake of various amino acids and therefore for protein synthesis [ 79 ]. Proteomic studies have shown that asparagine is specifically enriched in proteins associated with epithelial-mesenchymal transition (EMT), and restricting its availability lowers the level of EMT-associated proteins [ 80 ].
Beyond regulating pyrimidine and EMT-related protein synthesis to promote tumor progression, asparagine also regulates mesenchymal-epithelial transition (MET) to complete tumor colonization at distant metastatic sites [ 81 ]. Mechanistically, the scarcity of glutamine at distant metastatic sites, coupled with the heightened bioavailability of asparagine within these sites, triggers the activation of GS [ 82 ]. This activation propels glutamine biosynthesis, fostering the accumulation of HIF1α and MYC, which are pivotal factors in metastatic processes. The relative abundance of asparagine and glutamine may thus have critical effects on tumor cells at metastatic sites. Besides, HIF1α and MYC are associated with increased oxidative stress and play important roles in the transition of EMT-like tumor cells to a MET-like state, which is necessary for metastatic colonization. Thus, in an aggressive breast cancer model, ASNS upregulation promotes metastasis and results in the development of widespread metastases in the brain, liver, and lungs [ 80 ]. Consistently, asparagine restriction can repress these processes and prolong patient survival [ 40 ].
Serine/glycine and one-carbon metabolism
As one-carbon donors in the folate cycle, serine, glycine, and their associated enzymes contribute significantly to nucleotide synthesis, methylation reactions, and redox homeostasis to promote tumor progression. Serine hydroxymethyltransferase (cytoplasmic, SHMT1; mitochondrial, SHMT2) catalyzes the transfer of a carbon from serine to tetrahydrofolate (THF), resulting in the formation of 5,10-methylene-THF, which is essential for nucleotide synthesis to fuel rapid tumor proliferation (Fig. 1 ). A large-scale genomic study of human tumors revealed that SHMT2 is essential for cancer cell survival and that its knockdown severely impairs cancer cell proliferation [ 83 , 84 ]. Methylenetetrahydrofolate dehydrogenase 2 (MTHFD2), a one-carbon metabolism enzyme, is upregulated and links the folate cycle with the methionine cycle to promote S-adenosylmethionine (SAM) production in tumor cells [ 85 ]. Thereby, serine/glycine metabolism contributes to the methylation of genes and proteins and maintains redox homeostasis [ 86 ] (detailed later in amino acid metabolism and epigenetic modification). Additionally, post-translational modifications of these metabolic enzymes also play a regulatory role in tumor metabolism and progression. Deacetylation of SHMT2 by SIRT3 promotes its enzymatic activity, increases serine consumption, and ultimately promotes colorectal carcinogenesis [ 87 ]. When MTHFD2 is hyperacetylated, its enzymatic activity is inhibited and NADPH levels are downregulated. SIRT3 is also responsible for MTHFD2 deacetylation to maintain redox balance, which can be inhibited by cisplatin in colorectal cancer cells [ 88 ].
Because serine is a nonessential amino acid, its synthesis is vital to tumors. The de novo serine synthesis pathway (SSP) starts from 3-phosphoglyceric acid (3PG) generated by glycolysis and is catalyzed by the enzymes phosphoglycerate dehydrogenase (PHGDH), phosphoserine aminotransferase (PSAT), and phosphoserine phosphatase (PSPH). As the rate-limiting enzyme, PHGDH is highly expressed in tumor cells to counteract limited serine availability [ 89 ]. Conversely, RNF5, an E3 ubiquitin ligase, mediates PHGDH degradation and suppresses tumor progression [ 89 , 90 ]. When serine is sufficient, serine palmitoyltransferase (SPT) catalyzes the de novo biosynthesis of sphingolipids. However, when serine synthesis is limited, SPT instead uses alanine as a substrate to synthesize cytotoxic deoxysphingolipids, which suppress the tumor [ 65 ]. Thus, maintaining a certain level of serine is necessary for tumor cells to escape this cytotoxic suppression.
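The three-step SSP described above is a linear chain and can be written down directly. This is a Python sketch: the enzyme names come from the text, while the intermediate metabolite names (3-phosphohydroxypyruvate, 3-phosphoserine) are standard biochemistry not spelled out in the passage.

```python
# The de novo serine synthesis pathway (SSP) as an ordered enzyme chain.
# (substrate, enzyme, product) triples follow the description above.
ssp_steps = [
    ("3PG", "PHGDH", "3-phosphohydroxypyruvate"),
    ("3-phosphohydroxypyruvate", "PSAT", "3-phosphoserine"),
    ("3-phosphoserine", "PSPH", "serine"),
]

def trace(start):
    """Walk the SSP chain from a metabolite to its end product."""
    path = [start]
    current = start
    for substrate, enzyme, product in ssp_steps:
        if substrate == current:
            path.append(product)
            current = product
    return path

print(" -> ".join(trace("3PG")))  # chain ends in serine
```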
Mitochondrial SHMT2 is the primary catalyst for glycine production from serine, thereby feeding the folate cycle. Elevated glycine levels are linked to the progression of cancers such as multiple myeloma (MM) and lymphoma [ 91 ]. The glycine concentration in the bone marrow is elevated due to bone collagen degradation mediated by MM cell-secreted matrix metallopeptidase 13 (MMP13) [ 92 ]. Although glycine is a nonessential amino acid, experiments have shown that limiting the exogenous glycine supply arrests tumor cells in the G1 growth phase. It is worth noting that glycine is required for nucleotide biosynthesis, directly supplying carbons for de novo purine biosynthesis or donating one-carbon units to the folate pool via the mitochondrial glycine cleavage system under the catalysis of glycine dehydrogenase (GLDC) [ 93 ]. Besides, remodeled glycine metabolism caused by loss of protein arginine methyltransferase 7 (PRMT7) induces toxic death of leukemia stem cells [ 12 ]. Mechanistically, PRMT7 loss results in reduced expression of glycine decarboxylase, reprogramming glycine metabolism to generate methylglyoxal, which is detrimental to leukemia stem cells.
Signal pathways in amino acid metabolism
Conventional understanding holds that amino acids are essential building blocks for peptide and protein synthesis. However, recent research has shed light on the profound significance of amino acids as bioactive molecules that play active roles in signaling pathways and metabolic regulation. mTOR, MYC, and KRAS, which sense cellular amino acid levels and orchestrate these signals in a coordinated manner, play crucial roles in maintaining cellular metabolic homeostasis. Importantly, not only do changes in amino acid levels impact signaling pathways, but alterations in signaling pathways can also affect amino acid metabolism.
mTOR senses and regulates amino acid metabolism
mTOR is an atypical serine/threonine protein kinase that acts as a convergence point for anabolism and catabolism. Owing to differences in structure and function, mTOR complexes are categorized as mTORC1 and mTORC2. mTORC1, which is sensitive to rapamycin inhibition, comprises mTOR, Raptor, mLST8, Tti/Tel2, and the suppressive subunits PRAS40 and Deptor. Phosphorylation of PRAS40 and Deptor relieves their inhibition and activates mTORC1 [ 94 ]. 4EBP-1, S6K1, and SREBP are downstream effectors of mTORC1 and are associated with upregulated synthesis as well as poor prognosis in cancer [ 95 ]. mTORC1 is negatively regulated by low energy conditions, hypoxia, and DNA damage. It is positively regulated by growth factors, such as the insulin/insulin-like growth factor-1 (IGF-1) pathway, and by receptor tyrosine kinase-dependent Ras signaling. In particular, when amino acids are abundant, the mTORC1 signaling pathway is positively regulated to transmit signals that facilitate protein synthesis. Conversely, under conditions of amino acid insufficiency, protein translation is inhibited to meet energy demands. Considering that cancer cells often exist in a nutrient-deficient environment, mTORC1 is consistently negatively regulated to adapt to metabolic alterations. mTORC2, which is insensitive to rapamycin, consists of mTOR, mSIN1, mLST8, Tti/Tel2, Rictor, and the suppressive subunit Deptor. The balance between mTORC1 and mTORC2 orchestrates various metabolic processes, although our understanding of mTORC2 remains limited [ 96 ]. We mainly focus on the function of mTORC1 below.
Amino acid sensors in the cytoplasm, such as sestrins, SAR1B, CASTOR1/2, SAMTOR, and LARS, sense amino acid levels and thereby regulate the mTOR signaling pathway [ 97 ] (Fig. 2a ). The Rag GTPases promote the localization of mTORC1 to the lysosomal surface and its activation. Rag GTPases are further regulated by amino acids through the sensors-GATOR2-GATOR1 axis. GATOR1, a negative regulator of mTORC1, interacts with Rag and inhibits mTORC1 activity, while GATOR2 modulates mTORC1 activity by inhibiting GATOR1. Sestrins and SAR1B are leucine sensors in the cytosol. Under leucine deprivation, they bind to and inhibit GATOR2, a positive regulator of mTORC1 [ 98 , 99 ]. Leucine can bind to sestrins and SAR1B, dissociating GATOR2 from the complex to activate mTORC1. Furthermore, under amino acid deficiency, the general control nonderepressible 2 (GCN2)/ATF4 pathway is activated by uncharged tRNA, leading to the upregulation of sestrin expression to inhibit mTORC1 activity. Similar to sestrins, in arginine-depleted conditions, CASTOR1/2 form either a CASTOR1 homodimer or a CASTOR1/2 heterodimer that inhibits GATOR2 and subsequently mTORC1 activity [ 100 ]. Arginine disrupts the CASTOR1-GATOR2 complex by binding to CASTOR1 and frees GATOR2 to stimulate mTORC1. SAMTOR senses changes in intracellular methionine concentration in the form of SAM. SAM disrupts the SAMTOR-GATOR1 complex by binding directly to SAMTOR, reducing the GTPase-activating protein (GAP) activity of GATOR1 and thereby activating the mTORC1 signaling pathway [ 101 ]. With adequate amino acids, the E3 ubiquitin ligase KLHL22 acts as a positive regulator of mTORC1 by promoting the degradation of GATOR1 [ 102 ]. Leucyl-tRNA synthetase (LARS) senses intracellular leucine and activates mTORC1 by interacting directly with Rag rather than acting through GATOR1/2 [ 103 ]. LARS mediates the leucylation of RagA/B, which subsequently activates mTORC1 [ 103 ].
Consistently, the abovementioned amino acid sensors eventually activate Rag and mediate the lysosomal translocation of mTORC1 [ 104 ], a critical step in mTORC1 activation. When mTORC1 is activated at the lysosomal membrane, autophagy is inhibited and tumorigenesis is promoted.
In addition to the cytoplasmic sensors above, lysosomal sensors also play a crucial role in mTORC1 activation. The Ragulator complex provides a platform for the lysosome to tether Rag. The Ragulator interacts with the Rag heterodimers in an amino acid- and v-ATPase-dependent fashion, which finally activates mTORC1 [ 105 ]. SLC38A9, a lysosomal membrane protein with homology to amino acid transporters, participates in the activation of the mTORC1 signaling pathway by influencing Rag [ 106 ]. Mechanistically, SLC38A9 stimulates the release of GDP from RagA upon activation by arginine. This action propels Rag into the activated state, subsequently activating mTORC1 [ 107 ]. Notably, SLC38A9 is crucial for the efflux of leucine, glutamine, tyrosine, and phenylalanine generated from lysosomal proteolysis. This efflux is necessary to activate mTORC1 through the cytoplasmic sensors [ 108 ]. Thus, lysosomal sensors allow the integration of lysosomal nutrient information into the regulation of mTORC1 activity. Collectively, amino acids are not only sources of energy and protein synthesis in tumorigenesis but also act on mTORC1 as signaling molecules.
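The sensor-GATOR2-GATOR1-Rag logic described above can be caricatured as a boolean toy model. This is our simplification, not a quantitative or complete model: we assume both leucine and arginine must be present to fully de-repress GATOR2, and let SAM-bound SAMTOR suppress GATOR1 independently.

```python
# Boolean caricature of the amino acid sensing axis upstream of mTORC1.
# Simplifying assumptions (ours): sestrin/SAR1B and CASTOR1/2 each keep
# GATOR2 inhibited unless their ligand (leucine, arginine) is present;
# active GATOR2 inhibits GATOR1; SAM bound to SAMTOR also inactivates
# GATOR1; Rag (and thus mTORC1 recruitment) is on when GATOR1 is off.

def mtorc1_active(leucine=False, arginine=False, sam=False):
    gator2_active = leucine and arginine           # both sensor families relieved
    gator1_active = (not gator2_active) and (not sam)
    rag_active = not gator1_active                 # GATOR1 acts as a GAP for Rag
    return rag_active

# Amino acid sufficiency switches mTORC1 on; deprivation switches it off.
print(mtorc1_active(leucine=True, arginine=True))  # True
print(mtorc1_active(leucine=True))                 # False: arginine missing
```

Even this crude model reproduces the qualitative behavior in the text: deprivation of any single sensed amino acid is enough to keep GATOR1 active and mTORC1 off.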
Meanwhile, mTORC1 can also regulate amino acid metabolism through its downstream signaling effectors (Fig. 2a ). In response to growth signals, mTORC1 activates ATF4 to stimulate enzymes of serine synthesis for the folate cycle and purine biosynthesis [ 109 ]. Besides, the signaling effector ATF4 also transcriptionally regulates the serine transporter SLC1A5 and therefore facilitates the uptake of serine [ 110 ]. Notably, ATF4 can also regulate the expression of other amino acid transporters and related genes such as CHAC1, SESN2, SLC7A11, SLC7A5, SLC7A1, and SLC3A2, increasing amino acid uptake [ 111 ]. Upon the accumulation of glutamine, mTORC1 downregulates miR-23a and miR-23b and subsequently promotes GLS expression to accelerate glutamine catabolism. When glutamine is deficient, mTORC1 represses the transcription of the GDH inhibitor SIRT4, promoting glutamine anaplerosis [ 112 ]. mTORC1 also activates arginine catabolism by promoting ODC expression in RAS-transformed cells to promote polyamine production and tumor progression. Mechanistically, mTORC1 promotes the association between ODC mRNA and an mRNA-binding protein, promoting ODC mRNA stabilization and expression [ 113 ]. Besides, mTORC1 activation leads to the stabilization of MYC, which in turn induces ASS1 expression by competing with HIF1α for ASS1 promoter binding sites, thereby promoting arginine synthesis [ 114 ]. Collectively, mTORC1 regulates amino acid metabolism through multiple signaling effectors, including amino acid transporters and synthetic and catabolic enzymes. The mTORC1 downstream signaling molecule MYC also plays extensive regulatory roles in amino acid metabolism.
Myc drives amino acid metabolism
Myc is a proto-oncogene encoding the transcription factor MYC, which is constitutively expressed in tumors and associated with altered metabolism [ 115 ]. MYC directly regulates the expression of key metabolic enzymes, resulting in altered metabolism such as increased nutrient uptake, enhanced glycolysis, and elevated fatty acid and nucleotide synthesis [ 116 ]. In amino acid metabolism, both EAAs and NEAAs are regulated by MYC (Fig. 2b ).
As EAAs rely on external sources, the corresponding amino acid transporters are crucial and often upregulated in cancer. A positive feedback circuit called MYC-SLC7A5/SLC43A1 is critical in EAA metabolism in tumors. SLC7A5 imports EAAs in exchange for glutamine export, while SLC43A1 facilitates the import of large neutral essential amino acids (LNEAAs) like BCAAs and tryptophan [ 117 , 118 ]. MYC plays a pivotal role in promoting the transcription of SLC7A5/SLC43A1 and consequently the uptake of EAAs, which in turn activates mTORC1 and accelerates Myc transcription. When SLC7A5/SLC43A1 is blocked and amino acid uptake is thereby decreased, the GCN2-eIF2α amino acid stress response pathway is triggered, leading to the inhibition of MYC mRNA translation. Collectively, this circuit drives a cascade that affects the entire amino acid metabolic process and oncogene transcription, ultimately promoting tumorigenesis [ 119 ]. Specifically, MYC can enhance tryptophan uptake by upregulating the expression of transporters such as SLC7A5 and SLC1A5. Additionally, it can promote the conversion of tryptophan to kynurenine by inducing arylformamidase (AFMID) within the kynurenine pathway [ 120 , 121 ]. Elevated levels of kynurenine have been found to help tumors evade immune surveillance [ 122 ]. Additionally, Myc upregulates BCAT1, a crucial enzyme in BCAA catabolism, increasing biosynthesis and promoting tumor development [ 123 ].
In addition to the above regulation of EAAs, Myc can also catalyze NEAA metabolism, including that of glutamine, proline, and serine. Besides glucose, glutamine can function as a major fuel in tumors. MYC promotes the expression of glutamine transporters such as SLC1A5 and SLC38A5 [ 122 ]. In addition, Myc also plays a part in glutamine anabolism and catabolism. MYC can demethylate the promoter of GS, promoting the synthesis of glutamine in cancer cells [ 124 ]. However, Myc also participates in glutamine catabolism through acting on miR-23a/b in some cancers: miR-23a/b suppresses the expression of GLS, so repression of miR-23a/b facilitates glutaminolysis in tumor cells [ 125 ]. Collectively, MYC functions differently in different types of tumor cells, and differential metabolic requirements within specific cancer types might dictate the outcome of glutamine metabolism regulated by MYC.
The NEAA proline can be synthesized from glutamine and arginine by aldehyde dehydrogenase family 18 member A1 (ALDH18A1, P5CS) and pyrroline-5-carboxylate reductase (PYCR); thus, MYC-induced P5CS and PYCR upregulation can promote the proliferation and invasion of cancer cells [ 28 , 123 ]. In this way, glutamine-to-proline biosynthesis is promoted, helping tumor cells alleviate ER stress and maintain proline homeostasis. MYC also increases miR-23b to decrease the expression of proline oxidase/proline dehydrogenase (POX/PRODH), leading to the inhibition of proline catabolism [ 126 ]. Thus, MYC can not only promote the synthesis of proline, but also inhibit its breakdown [ 127 ].
MYC also participates in serine metabolism. Enhanced MYC activity activates metabolic enzymes in the SSP such as PHGDH, PSAT and PSPH, resulting in enhanced serine production [ 123 ]. MYC also upregulates SHMT2, which is critical for maintaining cellular redox homeostasis. Under the catalysis of SHMT2, serine-derived glycine directly participates in glutathione (GSH) production; in this way, MYC promotes de novo GSH synthesis to resist oxidative stress and promote tumor progression. SHMT2 upregulation also increases production of the one-carbon unit 5,10-mTHF and therefore the generation of NADPH, which plays a crucial role in maintaining redox balance by reducing GSSG to GSH [ 128 ]. Collectively, MYC promotes serine synthesis and thereby GSH and NADPH production to resist oxidation and promote tumor growth.
Altered KRAS and amino acid metabolism
KRAS, one of the most frequently mutated oncogenic proteins in human cancers, plays a pivotal role in regulating MYC and mTOR activity through the RAF-MEK-ERK and PI3K-AKT pathways, respectively [ 129 , 130 ] (Fig. 2c ). Oncogenic mutation impairs its GTPase activity, leading to persistent activation of downstream signaling cascades, altered cellular metabolism, and tumor cell proliferation. Amino acid metabolism is also regulated by KRAS, and targeting the metabolic network downstream of KRAS may offer potential avenues for treating KRAS-driven tumors [ 131 ].
KRAS-induced macropinocytosis maintains intracellular glutamine levels. Besides, KRAS regulates enzymes in glutamine catabolism. KRAS downregulates glutamate dehydrogenase 1 (GLUD1), redirecting glutamine metabolism toward NEAA production instead of the TCA cycle. KRAS also upregulates GOT2 in mitochondria and GOT1 in the cytoplasm [ 132 ]. Under these circumstances, glutamine-derived aspartate is converted into OAA by GOT1 in the cytoplasm and finally into pyruvate, producing NADPH to maintain the cellular redox balance. Thus, Kras -mutated cells resist cisplatin treatment by upregulating glutamine consumption to maintain the redox state. As upregulated glutamine catabolism is essential for tumors but dispensable for normal cells, inhibiting enzymes like GOT1 in the glutamine catabolic pathway leads to increased levels of reactive oxygen species (ROS), reducing GSH levels and ultimately inhibiting tumor growth [ 133 ].
Amino acids in tumor microenvironment
Tumors thrive within the intricate TME, a complex and continuously evolving milieu comprising surrounding blood vessels, immune cells, fibroblasts and the extracellular matrix (ECM) [ 134 ]. The bidirectional interaction between the tumor and the TME takes various forms. Tumors assimilate essential nutrients through macropinocytosis to satisfy their vigorous metabolism. Thus, rapidly proliferating tumor cells compete with fibroblasts and immune cells for relatively scarce nutrients, shaping a commonly hypoxic, acidic, and nutrient-deprived TME. Collectively, the TME promotes tumor progression and immune evasion through nutrient deprivation. Besides, tumors secrete various bioactive molecules that profoundly influence the TME.
Macropinocytosis in TME takes up amino acids
Macropinocytosis is a type of endocytosis that involves the nonspecific uptake of extracellular nutrient molecules like proteins and amino acids [ 135 ] (Fig. 3 ). Even when targeted drugs block enzymes in vital biosynthetic processes, cancer cells can still take up necessary biological materials from the TME (e.g., collagen fragments) through macropinocytosis to maintain proliferation. Macropinosome formation is an actin-dependent process initiated upon stimulation by growth factors like colony stimulating factor (CSF-1), epidermal growth factor (EGF), or platelet-derived growth factor. Besides, oncogenic Kras mutations or PI3K pathway activation can also drive macropinocytosis [ 136 ]. Proteins ingested through macropinocytosis can be decomposed into free amino acids by cellular autophagy for new protein synthesis, or catabolized to generate ATP for energy supply [ 137 ]. Therefore, macropinocytosis enables tumor cells to survive in harsh environments by providing both materials and energy.
Macropinocytosis is a metabolic adaptation to nutrient stress; thus, amino acid depletion drives macropinocytosis in cancers to obtain nutrients. Mechanistically, a low amino acid environment inhibits the Hippo pathway and promotes membrane localization of EGFR and TGFBRII, triggering macropinocytosis [ 138 ]. When intracellular free amino acids are abundant, mTORC1 precisely controls the utilization of extracellular protein-derived amino acids within lysosomes by repressing the catabolism of macropinocytosed protein [ 139 ].
Amino acid metabolism and CAFs
The TME is hypoxic and devoid of nutrients, which is unfavorable for the survival of cancer cells. Therefore, cancer cells co-opt fibroblasts in the TME, converting them into CAFs to create an altered homeostasis suitable for tumor growth [ 138 ]. CAFs enhance tumor invasion through their bidirectional interaction with tumor cells and their highly secretory, ECM-producing phenotype [ 140 ] (Fig. 3 ). CAFs regulate amino acid metabolism to promote multiple processes like tumorigenesis, angiogenesis, and metastasis [ 141 ]. The functional significance of CAFs in cancer makes them attractive targets for cancer treatment.
CAFs play a critical role in tumor progression by interacting with tumor cells and modulating tumor metabolism through amino acid transport (Fig. 3 ). In the stiff matrix of the TME, ECM mechanotransduction relocalizes YAP/TAZ to the nucleus and activates the transcription of GLS1 and SLC1A3 in both cancer cells and CAFs [ 142 , 143 ]. GLS1 promotes the transformation of glutamine into glutamate. However, the upregulated glutamate does not contribute to the TCA cycle in tumors, while in CAFs, glutamate is a major source of carbon for the TCA cycle and produces aspartate. SLC1A3 is upregulated simultaneously in tumor cells and CAFs, but functions differently: SLC1A3 in CAFs provides aspartate to cancer cells, while cancer cells in turn secrete glutamine-derived glutamate through SLC1A3 to CAFs. In this metabolic crosstalk, CAF-derived aspartate promotes pyrimidine and protein synthesis to sustain cancer cell proliferation, while cancer cell-derived glutamate balances the redox state of CAFs to promote ECM remodeling. Drugs targeting SLC1A3 can significantly reduce tumor growth because of the close interaction and extensive metabolic remodeling between CAFs and tumors. The amino acid metabolic interactions between CAFs and cancer cells have been reviewed in detail recently [ 141 , 144 ].
CAFs are the most important cells for producing ECM. Collagen, rich in proline, is the main component of the ECM, and its degradation can provide materials and energy to tumor cells. Proline synthesis is sustained through the conversion of glutamate, catalyzed by P5C synthase (P5CS), or of arginine into pyrroline-5-carboxylate (P5C). P5C serves as the ultimate precursor for proline and, consequently, collagen synthesis. Proline synthesis is upregulated in CAFs and acts as a limiting factor in ECM production. Thus, P5CS deletion decreases collagen and therefore ECM production, which can be rescued by proline supplementation [ 145 ].
Amino acid metabolism and immune cells
The TME comprises diverse cell types, including immunosuppressive cells like myeloid-derived suppressor cells (MDSCs), TAMs, and regulatory T cells (Tregs), as well as tumor-antagonizing immune cells like natural killer cells (NKs), T lymphocytes, B lymphocytes and dendritic cells (DCs). Although tumor-antagonizing immune cells within the TME tend to target and kill cancer cells in the early stage of tumorigenesis, cancer cells eventually escape immune surveillance through various mechanisms, including metabolic reprogramming.
Tumor-associated MDSCs highly express ARG1 (Fig. 3 ), depleting arginine in the TME; the resulting lack of arginine impairs T cell-mediated anti-tumor immunity [ 146 ]. Besides, MDSCs also highly express NOS-2, which not only decomposes arginine but also produces NO to impair the anti-tumor effect of T cells [ 147 ]. MDSCs express the x c - transporter to import cystine; thus, in the presence of MDSCs, cysteine availability is reduced and T cell function is impaired [ 148 ]. Apart from depleting arginine and cysteine, MDSCs also inhibit T cell function by expressing IDO. At the tumor site and in draining lymph nodes, IDO overexpression in MDSCs deprives the TME of tryptophan, which is necessary for T cell and NK cell proliferation; this eventually leads to T cells stagnating in G0 and NK cell apoptosis [ 149 ]. IDO also generates kynurenine, which has been reported to block T cell proliferation and even induce T cell death, as mentioned above. TAMs also create an immunosuppressive TME, which is tightly linked to glutamine metabolism [ 150 ]. In the nutrient-deprived tumor stroma, GS expression in TAMs increases to promote glutamine synthesis. Increased GS provides metabolic conditions that skew macrophages toward an M2-like, pro-metastatic phenotype by providing more glutamine and α-KG [ 151 , 152 ]. Similar to MDSCs, TAMs also overexpress IDO and ARG1 to deplete tryptophan and arginine and to produce kynurenine, suppressing T cell function [ 153 ]. It is worth noting that NOS-2 is also upregulated in TAMs to reduce arginine and suppress T cell function.
The function of tumor-antagonizing immune cells is also suppressed in the TME by altered amino acid metabolism, thereby influencing tumor development. During T cell activation and differentiation, amino acids play dual roles as both an energy source and substrates for protein and nucleic acid biosynthesis [ 154 ]. Compared to naive CD8 + T cells, activated CD8 + T cells increase the density of SLC1A5 and SLC7A5 on the cell surface to enhance glutamine uptake. Glutaminolysis underlies asymmetric T cell division, as glutaminolysis and mTORC1 activation are necessary to maintain c-Myc asymmetry. Asymmetric c-Myc levels in daughter T cells affect their proliferation, metabolism, and differentiation [ 155 ]. Thus, the daughter T cell proximal to the antigen-presenting cell accumulates glutamine transporters and adopts an effector-like fate, while the distal T cell, lacking glutamine transporters, assumes a memory-like fate. The differentiation of naive T cells into Th1 or Th17 cells also depends on glutaminolysis: when GLS is inhibited, the number of Th1 cells increases while the number of Th17 cells decreases [ 156 ]. Besides, glutamine limitation also promotes Treg differentiation [ 157 ]. IDO is activated in DCs to deplete tryptophan and produce kynurenine. As previously mentioned, the kinase GCN2 is activated by elevated levels of uncharged tryptophan tRNA, triggering CD8 + T cell-cycle arrest and functional anergy [ 158 ]. Moreover, GCN2 activation fosters de novo Treg differentiation and enhances the suppressor function of mature Tregs [ 159 ]. On the other hand, kynurenine overexpression not only impairs effector T cells, but also promotes Treg differentiation through the activation of AhR [ 160 ]. DCs and tumor cells both express SLC38A2 to facilitate glutamine uptake, modulating anti-tumor immunity. Mechanistically, glutamine acts as an intercellular metabolic checkpoint that licenses DC function in activating CD8 + T cells [ 161 ]. 
Glutamine promotes the formation of the FLCN-FNIP2 complex, consequently restricting TFEB activity. TFEB acts as a molecular switch, regulating exogenous antigen-presentation pathways through lysosome activation and thus influencing CD8 + T cell activity.
As mentioned above, arginine metabolism regulates the immune response and contributes to tumor progression [ 162 ]. The arginine metabolic enzyme NOS-2 directs the polarization of gamma delta T cells towards a pro-tumoral phenotype, thereby inducing metastatic progression [ 163 ]. Asparagine upregulates the LCK signaling pathway to enhance CD8 + T cell activity and thereby inhibit tumor growth, subverting the previous understanding of asparagine as cancer-promoting [ 164 ]. Collectively, the above findings reveal that amino acid metabolism in tumors and immune cells plays vital roles in tumorigenesis and tumor development.
Amino acid metabolism and epigenetic modification
Epigenetics refers to heritable phenotype changes that do not involve alterations in the DNA sequence [ 165 ]. It encompasses various mechanisms, including DNA methylation, histone modifications, chromatin remodeling, and small RNA regulation. Amino acid metabolites like SAM and acetyl-CoA are essential substrates for epigenetic modification, while amino acid metabolism also requires epigenetic modification of the associated metabolic enzymes [ 166 ]. This reciprocal regulatory relationship has a profound impact on tumor progression.
Methylation
SAM is synthesized from methionine and functions as the primary methyl donor in various methylation reactions, mainly DNA methylation (Fig. 4 ). DNA methylation is catalyzed by DNA methyltransferases, which transfer a methyl group from SAM to DNA. As a methionine transporter, SLC7A5 is crucial for maintaining intracellular methionine concentration. Disruption of methionine concentration is closely associated with cancer development, as DNA hypermethylation usually suppresses tumor suppressor gene expression [ 167 ].
Tumorigenesis is also affected when methylation occurs on histones [ 168 ]. The effect of histone methylation depends on which amino acid residue is methylated and how many methyl groups are involved [ 10 ]. Specifically, arginine methylation of histones can promote transcriptional activation, while lysine methylation of histones is involved in both transcriptional activation and repression [ 169 ]. With regard to transcriptional repression, tumors often overexpress the methionine transporter SLC43A2 to outcompete T cells for methionine. Methionine deficiency in T cells leads to downregulation of H3K79me2, promoting CD8 + T cell death and inhibiting Th17 polarization [ 170 ]. With regard to transcriptional activation, downregulation of SLC7A5 in T cells restricts methionine absorption and alters H3K27me3 deposition at the promoters of key T cell stemness genes. These changes promote the maintenance of a ‘stem-like memory’ state and improve long-term persistence and anti-tumor efficacy [ 171 ]. In conclusion, methionine metabolism regulates genomic architecture, chromatin dynamics and gene expression by dynamically modulating DNA and histone methylation [ 172 ].
In addition to methionine, other amino acids can also indirectly regulate methylation levels. Threonine is metabolized into 5-mTHF while serine is metabolized into 5,10-mTHF, playing pivotal roles in cell fate by contributing to SAM synthesis and thereby methylation status. Previous studies revealed that threonine depletion decreases SAM levels, leading to decreased tumor growth and increased differentiation [ 173 ]. Glycine N-methyltransferase (GNMT) catalyzes the transfer of a methyl group from SAM to glycine to form sarcosine, leading to SAM depletion and S-adenosyl-homocysteine (SAH) accumulation [ 174 ]. The SAM/SAH ratio affects methylation [ 175 ]. Therefore, knockdown of GNMT or the dysregulation of glycine clearly affects histone and DNA methylation, influencing gene expression and tumorigenesis [ 176 , 177 ]. Moreover, glycine can also be metabolized to produce 5,10-mTHF to fuel the folate cycle, thereby regulating the methionine cycle.
Demethylation
Amino acids like glutamine can be catabolized to produce α-KG, and α-KG-dependent dioxygenases can in turn remove methylation from histones and DNA [ 178 ]. The family of α-KG-dependent dioxygenases includes Jumonji C-domain lysine demethylases (JmjC-KDMs), ten-eleven translocation (TET) DNA cytosine-oxidizing enzymes, and prolyl hydroxylases (PHDs) [ 179 ].
Normal isocitrate dehydrogenases (cytoplasmic IDH1; mitochondrial IDH2) catalyze isocitrate to produce α-KG in the TCA cycle to fuel tumors. However, IDH is among the most frequently mutated metabolic genes in human cancer. Mutated IDH further catalyzes α-KG to 2-hydroxyglutarate (2-HG), which competitively inhibits α-KG-dependent dioxygenases [ 180 ]. Thus, mutated IDH inhibits DNA and histone demethylation, leading to a hypermethylation phenotype and impairing the differentiation of cancer stem cells [ 181 ]. In IDH-mutant glioblastoma, the CpG island in the promoter region of BCAT1 is usually hypermethylated; the resulting inhibition of BCAT1 expression reduces glutamate and therefore inhibits tumor growth and invasion [ 182 ]. Consistently, in individuals diagnosed with IDH WT /TET2 WT myeloid leukemia, α-KG is maintained at normal levels and BCAT1 expression is therefore promoted compared to IDH mut /TET mut myeloid leukemia. Elevated expression of BCAT1 serves as a robust indicator of poorer survival prognosis, and its level escalates significantly upon disease relapse [ 183 ].
Acetylation and deacetylation
Histone acetylation is a process in which acetyl-CoA donates acetyl groups to modify histone proteins, catalyzed by histone acetyltransferases (HATs) [ 184 ]. Both acetyl-CoA abundance and the ratio of acetyl-CoA to coenzyme A regulate histone acetylation in cancer [ 185 ]. Histone acetylation reduces the attractive force between histones and DNA, promoting a more open chromatin structure that allows increased transcriptional activity [ 11 , 186 ]. Furthermore, acetyl-CoA derived from amino acid metabolism also plays a role in non-histone protein acetylation and regulates tumor growth.
Acetyl-CoA levels and amino acid metabolism affect each other. BCAAs like isoleucine and leucine generate acetyl-CoA through transaminase-initiated catabolism. Other amino acids such as lysine, phenylalanine, tryptophan, and tyrosine also contribute to acetyl-CoA production by forming acetoacetyl-CoA. Similarly, alanine, serine, tryptophan, cysteine, glycine, and threonine yield acetyl-CoA through pyruvate formation. Leucine metabolism provides acetyl-CoA to the EP300 acetyltransferase, leading to acetylation of the mTORC1 regulator Raptor; this acetylation ultimately results in mTORC1 activation and altered amino acid metabolism [ 187 ]. Isoleucine and leucine catabolism generates acetyl-CoA within mitochondria, and mitochondrial acetyl-CoA must subsequently be transported to the cytoplasm and nucleus to regulate gene expression through epigenetic mechanisms [ 188 ].
Deacetylation refers to the removal of acetyl groups from histones, typically catalyzed by histone deacetylases (HDACs). Histone deacetylation results in a more condensed form of DNA known as heterochromatin, which is linked to reduced gene transcription. The sirtuin proteins are classified as class III HDACs, which depend on NAD + , in contrast to the zinc-dependent catalysis of class I, II, and IV enzymes [ 189 ]. Thus, tryptophan metabolism and the resulting NAD + /NADH ratio correlate closely with both acetylation state and energy status, playing important roles in inhibiting gene expression and thereby influencing tumor progression.
Phosphorylation, succinylation, and lactylation
In addition to the aforementioned types of epigenetic modification, other post-translational modifications (PTMs) are noteworthy, including phosphorylation, succinylation, and lactylation. Each of these PTMs alters the charge and structure of proteins. For instance, modifications of histones affect their binding to DNA, thereby influencing chromatin status and gene expression [ 190 ]. PTMs of enzymes influence their binding to substrates, thereby affecting various metabolic processes.
Serine, threonine and tyrosine residues have been established as sites of histone phosphorylation [ 191 ]. AMP-activated protein kinase (AMPK) is a sensor of cellular energy status that phosphorylates a variety of cellular substrates, including histones [ 192 ]. As a nutrient sensor, AMPK can be activated under a variety of stress conditions. Recent studies indicate that AMPK is activated by amino acids such as alanine [ 193 ], aspartate [ 194 ] and cysteine [ 194 ] and then phosphorylates histones. Histone phosphorylation is linked to various cellular processes, including transcriptional activation, mitosis, DNA repair, and apoptosis [ 195 ]. Phosphorylation also occurs in enzymes related to amino acid metabolism. Phosphorylation of branched-chain ketoacid dehydrogenase kinase regulates EMT genes and leads to the metastasis of colorectal cancer [ 196 ]. Besides, phosphorylation of GLS is essential for its enzymatic activity and critically contributes to tumorigenesis [ 197 ].
Succinylation is a novel PTM in which a succinyl group is added to a lysine residue. This modification is associated with the catabolism of BCAAs, as the resulting succinyl-CoA serves as an intermediate for succinylation [ 198 ]. The transcriptional characteristics of succinylated histones resemble those of their acetylated counterparts, thereby activating gene expression [ 198 , 199 ]. Therefore, succinyl-CoA accumulation may enhance cancer initiation and progression by promoting a global succinylation program that favors cancer growth [ 200 ]. Enzymes involved in amino acid metabolism can also be succinylated: upon oxidative stress, enhanced succinylation of GLS leads to increased oligomerization and activity, thereby promoting glutaminolysis and tumor growth [ 201 ].
Lactylation is another novel epigenetic modification, in which lactate modifies lysine residues [ 202 ]. Hypoxia is linked to elevated lactate levels originating from heightened glycolytic activity, thereby enhancing intracellular histone lactylation [ 203 ]. Histone lactylation can affect gene expression in tumor and immune cells, thereby promoting malignancy and immunosuppression. In tumor-infiltrating myeloid cells, potent accumulation of lactate leads to the upregulation of methyltransferase-like 3 (METTL3) through lactylation; increased METTL3 expression in these cells correlates with poor prognosis in patients [ 204 ].
Targeting amino acid metabolism in tumor therapy
Primary strategies in cancer treatment involve targeting disparities between normal and tumor cells in gene expression and phenotype, and altered metabolism is a noteworthy tumor phenotype. Amino acid metabolism, being fundamental to vital biological processes, plays a crucial role in tumor initiation and progression [ 5 ]. Consequently, targeting amino acid metabolism, for example through amino acid depletion, is a significant cancer treatment approach [ 7 ]. Strategies for targeting amino acid metabolism encompass inhibiting amino acid transporters, regulating amino acid biosynthesis and consumption, and developing amino acid-modified diets (Table 1 ). Given the different metabolism of different tumors, developing tailored strategies for distinct tumors is imperative.
Amino acid transporters inhibition
Substantial amino acid uptake occurs in tumor cells, which overexpress amino acid transporters. Consequently, targeting these transporters to restrict amino acid availability is an effective strategy for inhibiting tumor growth. Drugs targeting amino acid transporters and related enzymes have transitioned from preclinical research to clinical trials and have demonstrated efficacy in some cases [ 205 ]. Glutamine transporters are usually highly expressed and associated with poor prognosis, as external glutamine is essential for cancer cell survival [ 206 , 207 ]. This differentiates cancer cells from normal cells, providing a target for tumor therapy. Clinically, tamoxifen and raloxifene block glutamine uptake by inhibiting ASCT2 expression in breast cancer to suppress tumors [ 208 ]. Likewise, pharmacological blockade of ASCT2 with V-9302 also results in attenuated cancer cell growth and proliferation [ 209 ]. However, therapy targeting ASCT2 alone is not enough, as other transporters like the cystine/glutamate antiporter (xCT) can compensate for ASCT2 to fuel tumors. Cationic amino acid transporters like CAT-1, CAT-2 and CAT-3, which transport lysine, arginine, and histidine, are also dysregulated in tumors and associated with drug resistance. Specifically, CAT-1 expression correlates with tumor grade in prostate cancer [ 208 ]; it also plays a pivotal role in promoting growth, proliferation, and metastasis of colorectal and breast cancer [ 210 ]. Upregulated CAT-3 increases arginine uptake and thereby enables tumors to adapt to glutamine deprivation [ 39 ]. Downregulation of CATs (CAT-1, CAT-3) through lentiviral transduction with shRNAs or chemicals like verapamil shuts down tumor proliferation and induces death [ 211 , 212 ]. Conversely, loss of CAT-2 exacerbates inflammation-associated colon tumorigenesis [ 213 ].
Apart from being targeted directly to prevent amino acid uptake, these transporters can also be exploited to deliver drugs. The glutamine macromolecular analog polyglutamine (PGS) can mimic glutamine and selectively ferry siRNA through ASCT2; siRNA delivered by PGS therefore targets the growth and survival of certain cancers [ 214 ]. Thus, RNAi in combination with chemotherapy can augment the anti-tumor effect. Apart from ASCT2, SLC6A14 is another amino acid transporter upregulated in numerous cancer types [ 215 ]. Hence, SLC6A14 emerges as a promising target for tumor therapy, with its potential extending to the selective delivery of anticancer drugs to tumor cells. Stealth liposomal systems functionalized with the aspartate-polyoxyethylene stearate conjugate (APS) have been developed for SLC6A14-mediated targeted delivery of docetaxel, resulting in significantly improved efficiency of anticancer drug delivery into cells [ 216 ].
Amino acid metabolism enzymatic inhibition
Amino acid metabolism is also regulated by biosynthetic and catabolic enzymes, and numerous pharmacological inhibitors targeting key enzymes in amino acid metabolism are under extensive research. Asparagine is the most successful and well-documented target in amino acid depletion therapy, especially in acute lymphoblastic leukemia [ 217 ]. Asparlas, a drug frequently used in clinical practice for the treatment of acute lymphoblastic leukemia, acts as a long-acting asparagine-specific enzyme to deplete asparagine [ 218 ]. Studies have shown that inhibition of ASNS, or of the ETC by metformin, limits tumor asparagine synthesis and impairs tumor growth in multiple mouse models [ 219 ]. Kras mutation activates the ATF4 signaling pathway through its downstream effectors AKT and NRF2 in NSCLC. When ASNS is inhibited via AKT and extracellular asparagine is simultaneously depleted, tumor growth can be reduced [ 220 ]. Therefore, ASNS is also a promising therapeutic target for Kras -mutated NSCLC.
Arginine is another amino acid targeted in depletion therapy. As mentioned above, ASS1, a rate-limiting enzyme for arginine synthesis, is frequently deficient in tumors, rendering them dependent on extracellular arginine. Meanwhile, enzymes such as arginine deiminase (ADI) and ARG I can exhaust extracellular arginine by converting it into citrulline and ornithine, respectively. Thus, depleting arginine with ADI-PEG20 or rhArg1-PEG to lower serum arginine levels restrains cancer cell proliferation [ 221 , 222 ]. Since ARG is consistently overexpressed in both cancer cells and MDSCs, the ARG inhibitor INCB001158 has the potential to normalize arginine levels within the TME and thereby revitalize T cell functionality [ 223 ]. Simultaneously, intracellular inhibition of ARG by INCB001158 prevents tumor cells from using arginine for polyamine generation. In combination with immune checkpoint therapy, INCB001158 has been used in patients with advanced/metastatic solid tumors and has shown significant efficacy in a Phase II clinical trial. Additionally, a previous study showed that ASS1 acts as a tumor suppressor that is epigenetically downregulated in cancers such as ovarian cancer; its downregulation is associated with advanced tumor stage and with susceptibility to ADI-PEG20 treatment [ 224 ].
Similarly, as mentioned above, tumors driven by Myc or Kras are highly dependent on exogenous glutamine, and pharmacological inhibitors of GLS such as telaglenastat or 968 have shown excellent results in multiple tumors [ 7 ]. In addition, tumors control the expression of related enzymes to exploit serine and other one-carbon biosynthetic pathways, reducing their dependence on exogenous supply. Drugs targeting one-carbon metabolism, such as methotrexate (targeting dihydrofolate reductase, DHFR) and 5-fluorouracil (targeting thymidylate synthase), have long been in wide clinical use.
Amino acids modified dietary
Accumulating research has demonstrated that restriction of amino acids such as methionine, serine, glycine, leucine, glutamine, and cysteine plays a role in cancer intervention. Restricting methionine intake prevents tumor growth and metastasis in cancers such as TNBC [ 225 ], CRC [ 226 ], and glioma [ 227 ]. Preclinical studies show that restricting serine and glycine intake significantly inhibits tumor cell proliferation in mouse models of intestinal cancer and lymphoma [ 228 ]. Under a leucine-rich diet, metabolism shifts from glycolytic metabolism to oxidative phosphorylation, resulting in a less aggressive tumor phenotype [ 229 ]. Metabolomic analysis reveals that dietary glutamine uptake effectively increases the concentration of glutamine in tumors and of its downstream metabolite, α-KG, without increasing the biosynthetic intermediates necessary for cell proliferation. The increase in intratumoural α-KG concentration drives hypomethylation of H3K4me3, thereby suppressing epigenetically activated oncogenic pathways in melanoma [ 230 ]. However, this effect may be melanoma-specific, as glutamine fuels tumor cell proliferation in most tumors.
Amino acid dietary restriction also affects tumor development by modulating immune cell function. Diets lacking sulfur-containing amino acids such as methionine and cysteine downregulate the differentiation of CD4 + T cells towards Tregs and promote the migration of CD8 + T cells, increasing the CD8 + /Treg ratio in tumors and thereby enhancing immune killing [ 231 ]. Dietary methionine restriction blocks cGAS methylation, releasing cGAS from chromatin and promoting its translocation into the cytoplasm [ 232 ]. Cytoplasmic cGAS then senses dsDNA and activates the immune response, leading to tumor restriction. The effect of tumor immunotherapy is significantly enhanced by restricting dietary amino acid intake.
Summary
During proliferation, tumor cells dysregulate their metabolism to obtain more energy and nutrients. Amino acids serve as fuels and raw materials for protein synthesis as well as signaling molecules in energy metabolism, participating in the tumor growth process. In addition, amino acid metabolism intertwines with signaling pathways, the TME and epigenetic modification. Amino acids act as pivotal signaling molecules, stimulating pathways such as mTORC1, MYC and KRAS to drive tumor growth and proliferation. Within the TME, a dynamic interplay occurs as nutrient competition and amino acid availability dictate tumor progression. Tumor and immune cells jointly regulate amino acid metabolism, which is crucial for sustaining the metabolic needs of proliferating tumor cells and for sculpting the immune response. Moreover, amino acid-derived metabolites such as α-KG, acetyl-CoA, NAD + and SAM wield influence over epigenetic regulation. Comprehending this metabolic network provides a foundation for targeted therapeutic approaches that aim to disrupt amino acid metabolism.
Compared with normal cells, tumor cells have greater amino acid requirements and dependence. Targeting amino acid metabolism to treat tumors is therefore potentially more effective and less damaging. However, the field still faces many challenges. Some drugs targeting amino acid metabolism have shown promising effects in animal experiments, but translating these effects into the clinic remains difficult. Insufficient amino acid depletion may merely maintain tumor cells in a state of quiescence, and once treatment is terminated, tumor cells will reappear. The effect of amino acid intervention on tumor development is related not only to tumor type but also to external factors and even tumor location [ 233 ]. Since most metabolic inhibitors are not effective as single agents, combination therapy may be a more reasonable strategy. Nevertheless, compared with other therapies, amino acid depletion therapy is safer for normal cells. It is hoped that, with a wider understanding of amino acid metabolism in tumors and advances in metabolic analysis techniques, appropriate patients can be identified. A full understanding of the metabolic flexibility of amino acid metabolism in cancer cells is therefore of great significance, providing further insight into metabolic dependences and liabilities that can be exploited therapeutically.
Supplementary information
The online version contains supplementary material available at 10.1038/s41419-024-06435-w.
Author contributions
Conceptualization: SX. Compilation of literature: JC and LC. Article writing and editing: JC and LC. Figures: JC and LC. Supervision: JC and SL. All authors read and approved the final manuscript.
Funding
This work was funded by the National Natural Science Foundation of China (82071789, 31870910), Peak Disciplines (Type IV) of Institutions of Higher Learning in Shanghai, and Key Basic Research Projects (2021-JCJQ-ZD-077-11).
Competing interests
The authors declare no competing interests.

Cell Death Dis. 2024 Jan 13;15(1):42.
PMC10787763 (PMID: 38218974)

Introduction
Biocatalysts have long promised to play a major role in synthetic organic chemistry, offering potentially greener routes to high-value chemicals 1 , 2 . However, the highly valued specificity and selectivity of enzymes also limit their exploitation, as they are more difficult to adapt to new substrates than traditional catalysts 3 . The challenge is particularly acute for enzymes that use two substrates, as their interactions with the enzyme can often occur at overlapping sites. As a result, few studies have reported the engineering of enzymes to modify both substrate specificities to access new products 4 , 5 .
The formation of asymmetric carbon–carbon bonds is a critical route in organic synthesis to access a variety of innovative and natural compounds that can be used as building blocks in additional synthesis 6 – 8 . Due to their great selectivity and specificity, asymmetric carbon–carbon bond-forming enzymes like transketolase (TK) (EC 2.2.1.1) have significant synthetic potential 9 , 10 . TK catalyses the reversible transfer of a C 2 -ketol unit from donor substrate d -xylulose-5-phosphate to aldose acceptor substrates of either d -ribose-5-phosphate or d -erythrose-4-phosphate in the pentose phosphate pathway of all cells 11 , 12 . This metabolic pathway is critical in the cellular production of nucleotides, amino acids, and fatty acids.
In recent years, TK variants have been demonstrated in the synthesis of complex carbohydrates and other high-value compounds, including l -erythrulose 13 , deoxysugars 14 , N -Aryl Hydroxamic Acids 15 , 7-keto-octuronic acid 16 . At large scale, TK has been used for the production of unusual sugars 17 . Wild-type TK enzymes tend to accept a relatively limited range of hydroxyaldehyde aldol-acceptors with strict (2 R )-specificity 17 , 18 . Donor substrates are similarly limited as they require an oxo group adjacent to the scissile C–C bond, and also typically a C-1 hydroxyl group. Furthermore, they tend to prefer donor substrates with a D-threo configuration of C-3 and C-4 hydroxyls 19 . Wild-type TK enzymes also have a preference for substrates that are phosphorylated due to complementary positively charged side-chains at the active site entrance 19 – 21 . However, this is not an absolute requirement, and TK can accept β-hydroxypyruvate (HPA) as the ketol donor which is advantageous in biocatalysis as it renders the donor half-reaction quasi-irreversible through liberation of CO 2 . This has been particularly useful for E. coli TK which gives up to 30-fold higher specific activity using HPA compared to yeast and spinach orthologs 22 .
To broaden the synthetic capability of transketolase, and to make it more readily adaptable to new reactions, a toolbox of variants is desired that can accept a wider set of acceptor and donor substrates. TKs from various organisms have been subjected to extensive modifications through both rational engineering and directed evolution. E. coli TK has been engineered for improved and reversed enantioselectivity 23 , stability 24 , 25 , and to progressively broaden the aldol-acceptor substrate range to polar aliphatics 21 , non-polar aliphatics 23 . TK from Geobacillus stearothermophilus (TK gst ) has also been similarly engineered using related variants 26 – 29 .
One key goal has been to engineer E. coli TK to accept aromatic aldehydes, as shown through the lineage of engineered variants in Fig. 1 . Initial work used structure-guided engineering of active-site residues to obtain the “3M” variant (S385Y/D469T/R520Q) that accepts 3-formylbenzoic acid (3-FBA) when using HPA as the ketol donor 5 . Crystal structural analysis of 3M and molecular docking of substrates revealed divergent binding modes 30 for the three benzaldehyde analogues, 3-FBA, 4-formylbenzoic acid (4-FBA), and 3-hydroxybenzaldehyde (3-HBA). It was found that 3-FBA and 4-FBA oriented into two distinct binding pockets, whereas 3-HBA could bind into either with no obvious preference 30 , 31 .
The 3M variant has been further engineered to reverse the enantioselectivity 32 , while incorporating non-natural amino acids into the active site led to variants that were more stable and more active towards 4-FBA and 3-HBA 33 . In the latter study, the variant S385pCNF/D469T/R520Q, which incorporated p -cyanophenylalanine at residue 385, was found to have 43× higher activity towards 3-HBA compared to the original 3M variant 33 . The stability of the 3M variant was also restored to wild-type levels using four previously known stabilising mutations targeting residues within a network that was dynamically correlated with active-site residues 34 . The new “7M” variant, H192P/A282P/I365L/S385Y/D469T/G506A/R520Q, retained its activity with 3-FBA and also improved the activity with 4-FBA by 3×.
Another recent focus has been to engineer variants that can simultaneously use aromatic aldehydes with pyruvate as the ketol donor instead of HPA. This would give synthetic access to precursors to drugs such as spisulosine and phenylpropanolamine, and to analogues of phenylacetylcarbinol (PAC), an important pharmaceutical intermediate (Scheme 1 ). Pyruvate is only different from HPA by the absence of the C-3-hydroxyl group. However, it is unable to serve as a donor substrate in the TK process, highlighting the crucial significance of the hydroxyl group. The E. coli TK variant “6M” ( H100L /H192P/A282P/I365L/D469T/G506A) was developed by introducing H100L and the previously known propanal-accepting D469T mutation into the WT-stabilising “4M” variant (H192P/A282P/I365L/G506A) 35 to enable pyruvate acceptance as the donor along with propanal as the acceptor. A similar result was achieved recently with glycolaldehyde as the acceptor, by using a related pyruvate-accepting enzyme 1-deoxy- d -xylulose-5-phosphate synthase (DXS) to guide the engineering of TK gst to accept pyruvate and other aliphatic analogues 36 , 37 .
Out of a series of variants, the “6M” E. coli TK variant was found previously to have the highest activity towards 3-FBA when using pyruvate, yielding just enough material in 48 h to isolate and characterise the product by NMR and LC/MS 4 . Interestingly, combining the 3-FBA-accepting mutations S385Y and R520Q from “3M” into pyruvate-accepting variants such as 4M/D469T/H473N and 4M/D469T/H473N/R520Q led to losses in activity for the 3-FBA and pyruvate reaction 4 . The S385 and R520 residues both also form part of the entrance to the active site, and so their mutagenesis can impact not only 3-FBA binding and orientation but may also control the passage of pyruvate on its way towards the TPP cofactor.
Thus, the sites that influenced donor and acceptor substrate acceptance appeared to overlap or interact in a way that made combining beneficial mutations from each unpredictable. Furthermore, despite achieving success in obtaining activity with aromatic substrates and pyruvate, the activities obtained from the variants to date remained very low compared to those achieved with hydroxypyruvate (HPA) (Table 1 ).
To date mutations from “3M” had not been explored within the “6M” variant. Therefore, in this new study, we created a small library that recombined the 6M E. coli TK variant (H192P/A282P/I365L/G506A/D469T/H100L) with R520Q and a range of S385 mutations, including non-natural amino acids. These were then screened against reactions with pyruvate as donor and the three aromatic acceptor aldehydes 3-FBA, 4-FBA and 3-HBA. This led to new variants with significantly improved activities and higher levels of conversion. Kinetic analysis alongside molecular docking of the substrates into key variants revealed possible reasons for improved activities. | Materials and methods
Materials
All chemicals were obtained from Sigma-Aldrich, Merck, Germany. Non-natural amino acid, p-Amino l -phenylalanine (pAMF) and p-Cyano- l -Phenylalanine (pCNF) were obtained from Chem Cruz (Texas, USA).
A Qiagen Ni-NTA column followed by an Amicon Ultra 10 kDa filter unit was used to purify and concentrate the proteins. The transketolase-gene-carrying plasmid pQR791 and the suppression plasmid pUltra were obtained from our lab master stock. For reverse-phase HPLC, an Agilent ACE5 C18 reverse-phase column (150 × 9 × 4.6 mm) was used on a departmental Agilent 1200 HPLC system; a UPLC-SQD2 was used for mass spectrometric analysis, and NMR instrumentation is described below.
Methodologies
Preparation of a TK mutant library
TK variants were prepared by site-directed mutagenesis using the QuikChange site-directed mutagenesis kit (Agilent Technologies). Variants were obtained by sequential mutation from the 6M TK variant (H100L/H192P/A282P/I365L/D469T/G506A). A detailed list of mutants and the primers used is shown in Table S1 (supplementary information) . This library consists of 8 variants in total. For the variants containing one of the two non-natural amino acids (nnAA), pAMF and pCNF, at residue 385, we mutated the site to the amber stop codon TAG. The nnAA incorporation was carried out by a standard amber suppression technique in which the pUltra plasmid pULTRA-CNF, a gift from Peter Schultz (Addgene plasmid # 48215) 40 carrying the pAMF/pCNF-specific aminoacyl synthetase gene, was co-transformed with the pQR729 plasmid for TK gene expression into an amberless E. coli strain 41 . TK was then overexpressed using media supplemented with pAMF or pCNF. The nnAA incorporation was confirmed by ESI–MS experiments.
Transketolase purification
The cell pellets from 100-mL cell cultures for each variant or wild type were resuspended using cofactor solution containing 2.4 mM ThDP, 9 mM MgCl 2 and 50 mM Tris/HCl, pH 7.0. The suspended cells were then transferred to a 50 mL falcon tube for cell lysis by sonication, then centrifuged at 18,900 g for 15 min to collect the cell lysate supernatant. The protein expression level was assessed by 12% SDS/PAGE. TK with nnAAs were overexpressed using a 1 mM IPTG induction. The TK was C-terminally fused to a His6 tag, and so TK variants were purified using a Ni–NTA column. Purity was assessed by 12% SDS PAGE and the concentration measured using the Bradford method. Eluted holo-TK was buffer exchanged into 2.4 mM ThDP, 9 mM MgCl 2 and 50 mM Tris/HCl, pH 7.0, and diluted to 0.15 mg/mL or 0.2 mg/mL prior to enzyme activity reactions.
TK activities with aromatic aldehydes
Reactions between aromatic aldehydes and sodium pyruvate were initiated by adding 1 volume of substrate solution containing 150 mM aldehyde, 150 mM Na pyruvate, 50 mM Tris/HCl, pH 7.0, 2.4 mM ThDP, 9 mM MgCl 2 , to 2 volumes of the 0.15 mg/mL enzyme solution, to give final concentrations of 50 mM aromatic aldehydes and 50 mM sodium pyruvate. After reaction at 25 °C for 24 h, 20 μL of reaction mixture was transferred to 380 μL of 0.1% TFA to quench the reaction. Samples were then analysed by RP-HPLC with an ACE5 C18 reverse-phase column (150 × 9 × 4.6 mm), UV detection at 210 and 250 nm, and a mobile phase at a flow rate of 1.0 mL/min, starting with 70% of A (0.1% TFA) and 30% of B (100% acetonitrile) for 7 min.
Enzyme kinetics
Holo-TK variants at 0.2 mg/mL in 2.4 mM ThDP, 9 mM MgCl 2 and 50 mM Tris/HCl, pH 7.0 were combined 1:1 with 2× substrate solutions. Kinetic parameters were obtained at 50 mM pyruvate and a range of 25–150 mM 3-FBA, in final conditions of 0.1 mg/mL holo-TK, 50 mM Tris–HCl pH 7.0, 2.4 mM ThDP, 9 mM MgCl 2 , at 21 °C. Aliquots of 50 μL were quenched at various times within 24 h by adding 350 μL of 0.1% (v/v) TFA. Triplicate reactions were monitored using RP-HPLC as above. All data were fitted by non-linear regression in OriginPro9.0, using the Michaelis–Menten equation to determine the K m and k cat of wild-type TK and the variants.
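The non-linear regression described above (performed in OriginPro) can be sketched equivalently in Python with SciPy. The rate values, the ~72 kDa subunit mass used to convert V max to k cat, and all variable names below are illustrative assumptions for a minimal example, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial rate v as a function of substrate concentration s (mM)."""
    return vmax * s / (km + s)

# Illustrative initial-rate data: [3-FBA] in mM vs rate (arbitrary units/h)
s = np.array([25.0, 50.0, 75.0, 100.0, 125.0, 150.0])
v = michaelis_menten(s, vmax=12.0, km=80.0)  # noiseless synthetic rates

# Non-linear least-squares fit of Vmax and Km
popt, pcov = curve_fit(michaelis_menten, s, v, p0=[10.0, 50.0])
vmax_fit, km_fit = popt

# k_cat = Vmax / [E]; 0.1 mg/mL enzyme and an assumed ~72 kDa subunit
enzyme_conc_mM = 0.1 / 72.0  # (mg/mL) / (kDa) gives mM
kcat = vmax_fit / enzyme_conc_mM  # same time units as the rates
```

In practice the quenched-timepoint HPLC data would first be converted to initial rates before fitting, and the covariance matrix `pcov` gives the parameter errors reported in Table 3.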
NMR analysis
Product fractions were collected from multiple C18 RP-HPLC runs, pooled and concentrated using a vacuum centrifuge until > 1 mg was obtained. This was dissolved in D 2 O within NMR tubes. 1 H-NMR spectra were recorded on Bruker Avance 400, 500, 600 and 700 MHz spectrometers at 25 °C, using the residual protic solvent stated as the internal standard. Chemical shifts are quoted in p.p.m. to the nearest 0.01 p.p.m.
Sample preparation for capillary LC–MS/MS
For mass spectrometry, the C18 HPLC mobile phase was changed to 0.1% formic acid and 100% acetonitrile; product fractions were collected, concentrated by vacuum centrifugation, and then desalted by solid-phase extraction with a Waters C 18 100 mg cartridge (Waters, UK). The C 18 cartridge was preconditioned, 100 μL of each sample was applied, the cartridge was washed with 1 mL of water, and the sample components were then desorbed with 1 mL of methanol by gravity. 20 μL of each sample was then injected into the LC–MS instrument.
Capillary LC–MS/MS analysis
The LC–MS/MS system consisted of a Vanquish liquid chromatography system (Thermo Fisher Scientific, UK) coupled via a heated electrospray ionisation (HESI) probe to a Q Exactive mass spectrometer (Thermo Fisher Scientific, UK). Chromatographic separation was achieved on a Hypersil GOLD reverse-phase column (100 mm long, 2.1 mm internal diameter, 1.9 μm particle size) at a flow rate of 200 μL/min; mobile phase A was 0.1% formic acid in water and mobile phase B was 0.1% formic acid in 80% acetonitrile. The gradient was as follows: after 1 min at 5% B, the proportion of B was raised to 95% over the next 5 min and maintained at 95% for a further 1 min; the proportion of B was then returned to 5% within 6 s and held for 1.9 min to re-equilibrate the Hypersil C18 column. The LC column eluent was directed to the HESI source of the Q Exactive mass spectrometer. The HESI probe was operated with a sheath gas flow of 25 psi, an auxiliary gas flow of 10 psi, a spray voltage of 3.50 kV, a capillary temperature of 320 °C, an S-lens RF level of 55, and an auxiliary gas heater temperature of 50 °C. The Q Exactive mass spectrometer was operated in positive ion mode. MS data were acquired in data-dependent acquisition mode at 70,000 mass resolution (full width at half-maximum height, FWHM definition), and the top eight most abundant singly charged ions in the 80–800 m/z range were selected for MS/MS. The automatic gain control for the Q Exactive was set to 300,000 ions, and the automatic gain control for MS/MS in the ion trap was set to 25,000 ions. For MS/MS, the isolation width was set to 1.2 amu and the collision energy was 30%. MS/MS scans were acquired at a mass resolution of 17,500 at 200 m/z, and three MS/MS microscans were accumulated for each precursor. Dynamic exclusion was enabled, and selected ions were excluded for 20 s before they could be selected for another round of MS/MS.
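One way to make a gradient program like the one above unambiguous is to encode it as breakpoints and interpolate linearly between them. The breakpoint times below are my reading of the text (taking "within 6 s" as 0.1 min), so they are an interpretation rather than an instrument export.

```python
# Gradient program as (time_min, percent_B) breakpoints; linear ramps between points
GRADIENT = [(0.0, 5.0), (1.0, 5.0), (6.0, 95.0), (7.0, 95.0), (7.1, 5.0), (9.0, 5.0)]

def percent_b(t):
    """Mobile-phase %B at time t (minutes) by linear interpolation."""
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside gradient program")
```

For example, `percent_b(3.5)` returns 50.0, halfway up the 5-min ramp from 5% to 95% B.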
Computational molecular docking
The structures of the variants H192P/A282P/I365L/G506A/H100L/D469T/S385Y (TK1_7M), H192P/A282P/I365L/G506A/H100L/D469T/S385Y/R520Q (TK2_8M), H192P/A282P/I365L/G506A/H100L/D469T/S385F (TK3_7M) and H192P/A282P/I365L/G506A/H100L/D469T/S385pCNF/R520Q (TK5_8M) were obtained using SWISS-MODEL with 5HHT.pdb as the template, and the S385pCNF mutation was introduced using the SwissSidechain PyMOL plugin. The enamine-ThDP intermediate present in previously docked crystal structures was aligned into the WT and triple mutant (5HHT). The 3D conformation of 3-FBA was obtained from the PubChem (NIH) database and converted from an .sdf into a .pdb file using Open Babel. AutoDock Vina was used to dock 3-FBA into WT and mutant TKs with the grid centred at − 11.25 Å, 25.858 Å and 40.198 Å (in 5HHT.pdb) and an exhaustiveness of 24 42 . A grid size of 30 Å × 30 Å × 30 Å was used for the docking of 3-FBA; this includes the entire active site while omitting other surface hydrophobic pockets. Three replicate dockings were carried out and the conformations with the lowest energy were selected for analysis. For a comparative view, TK-WT and the four variant structures were also docked with 3-FBA in AutoDock 4.2 43 . The explorable space for docking was defined using the same grid as above. For each search, a Lamarckian genetic algorithm was run 200 times with a maximum of 25 million energy evaluations. The ligand (3-FBA) was flexible whereas the enzyme remained rigid. Resulting poses were analysed and checked for molecular interactions (hydrogen bonding, hydrophobic interactions, π-π stacking) in the PyMOL Molecular Graphics System (Schrödinger, USA) and Protein Plus (TuHH).
Our goal was to find E. coli TK variants with improved activity towards aromatic acceptor aldehydes 3-FBA, 4-FBA and 3-HBA and with pyruvate as the donor, starting from the best variant to date, 6M (H192P/A282P/I365L/G506A/D469T/H100L). The 3M variant (S385Y/D469T/R520Q) was previously found to accept aromatic aldehydes, but only with hydroxypyruvate as donor. Previous attempts to introduce S385Y into 4M/D469T/H473N or 4M/D469T/H473N/R520Q actually led to a loss in activity towards 3-FBA with pyruvate 4 . However, this mutation was not previously tested in 6M. Other mutations of S385, including non-natural substitutions within 3M (S385X/D469T/R520Q) were also found previously to improve the activity towards aromatic aldehydes, but again only when using HPA as donor substrate 33 . The R520Q mutation has had a stabilising effect in several variants, often also improving activity 5 , 31 , 38 , but it did not improve the activity towards 3-FBA when inserted into 4M/D469T/H473N previously 4 . Therefore, we were interested now to explore whether any combinations of the various S385X mutations and also R520Q, could improve the activity of 6M towards aromatic aldehydes and pyruvate.
TK variant activities with aromatic substrates
A series of TK variants were generated, expressed, purified and then evaluated for their activities towards three aromatic aldehyde acceptor substrates at 50 mM, with 50 mM pyruvate as the donor substrate. The variant names, mutations and initial levels of conversion after 24 h, based on product peak areas as a fraction of total peak area for substrate and product, are shown in Table 2 .
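The conversion estimate described above (product peak area as a fraction of total substrate-plus-product peak area) can be written as a one-line calculation; note that this implicitly assumes equal UV response factors for substrate and product. The peak areas below are hypothetical.

```python
def conversion_percent(product_area, substrate_area):
    """Conversion (%) as product peak area over total (substrate + product) area.
    Assumes equal detector response factors for the two species."""
    total = product_area + substrate_area
    if total == 0:
        raise ValueError("no peaks detected")
    return 100.0 * product_area / total

# Hypothetical HPLC peak areas for one 24 h reaction
print(conversion_percent(622.0, 378.0))  # → 62.2
```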
Among all eight variants, TK-1, TK-2, TK-3 and TK-5C showed the greatest conversion to product after 24 h, compared to no conversion with TK-WT. TK-3 and TK-5C also showed the greatest conversion when using 4-FBA. The level of conversion for 3-HBA remained relatively low with all variants. In previous work, the 6M variant (4M/H100L/D469T) gave only a 2.5% conversion of 3-FBA after 24 h. This could be increased to 46.8% at 1.3 mg/mL enzyme, but was still lower than the 62.2% achieved with the new TK-3 variant. Given that the greatest activities were with 3-FBA, we focused further characterisation on that substrate, selecting the four most prominent variants TK-1, TK-2, TK-3 and TK-5C. We also used the conversion of 3-FBA with each TK variant to isolate the product and confirm its identity as previously 4 using a combination of LC–MS and NMR ( Supplementary information, Figs. S1, S2, S3 ).
Kinetic and stability studies of TK variants with substrate 3-FBA and pyruvate
The kinetic parameters for TK-1, TK-2, TK-3 and TK-5C were determined at 0.1 mg/mL enzyme and 50 mM pyruvate by varying the concentration of 3-FBA from 0 to 150 mM, and monitoring reactions in triplicate for up to 24 h. The kinetic parameters are shown in Table 3 , alongside the specific activity at 50 mM 3-FBA, and the conversion at 24 h (taken from Table 2 ). For comparison, the parameters determined previously for 6 M are also shown, although the K m for 3-FBA was not determined as that experiment varied pyruvate instead. It was not possible to obtain parameters for WT-TK as no activity was detected and so rates and rate constants are set to 0, while the K m cannot be determined.
Compared to 6M, all four variants demonstrated significant increases in specific activity, V m and k cat . The specific activity of TK-1 in particular was 400× higher than for 6M, while the V m was 900× greater. While the k cat values for TK-1 and TK-2 were 2–3× higher than for TK-3 and TK-5C, their K m values were correspondingly higher. The resulting k cat / K m values were fairly similar for TK-1, TK-3 and TK-5C at 0.52–0.6/s mM, with TK-2 at a lower value of 0.2/s mM. Clearly the two larger K m values (for TK-1 and TK-2) exceeded the studied range of 3-FBA (up to 150 mM), resulting in the larger errors on their estimated values.
The stability of the variants was also measured using thermal scanning fluorimetry with intrinsic fluorescence as the probe. TK is already known to aggregate rapidly upon unfolding 44 , but comparisons can still be made between variants with consistent ramping protocols, and so an apparent thermal transition mid-point ( T m ) is estimated. The T m for wild-type TK was 57.8 °C, and consistent with a previous measurement of 58.3 °C under similar conditions (0.1 mg/mL, 25 mM Tris–HCl, pH 7.5, 0.5 mM MgCl 2 , 0.05 mM TPP) 44 .
The variants all had increased T m values, reaching 65.1 °C, and 63.7 °C for TK-3 and TK-5C respectively. TK-1 and TK-2 gave more modest increases to 59.2 °C and 61.4 °C respectively. It was previously shown that four of the mutations used within 6M increased the T m over wild-type TK by 3.6 °C 35 , whereas their introduction into 3M increased the T m by 3 °C 34 . These earlier experiments suggest that a significant part of the 1.4–7.3 °C increased stability in the new variants was likely to have already been present in 6M.
Overall, TK-3 was the most stable, had the lowest K m for 3-FBA and performed well in terms of catalytic efficiency ( k cat / K m ) and conversion (62%). However, for biocatalysis under conditions where substrate could be added in excess over K m , then the k cat and stability are more important factors. With that perspective, the variants TK-1 and TK-2 might be preferred for their elevated activity despite the slightly lower T m values than TK-3.
It is worth comparing the impact of the R520Q and S385X mutations here with their addition to previous variants. R520Q in TK-2 increased the T m over TK-1 by 2.2 °C with no adverse impact on activity, consistent with its stabilising effects in previous studies. S385Y and S385F each had a significant impact on activity when added to 6M in the current study, leading to respective 630× and 200× increases in k cat for the reaction with 3-FBA and pyruvate. This contrasts with the previous impact of adding S385Y into the D469T/R520Q variant (to form “3M”), which improved the catalytic efficiency towards 3-FBA and HPA by significantly decreasing the K m for 3-FBA. Furthermore, when the S385Y mutation was incorporated into 4M/D469T/H473N and 4M/D469T/H473N/R520Q, it led to a loss of activity towards 3-FBA with pyruvate 4 . The contextual mutations were clearly very important for the effects of S385Y. S385F gave a lower k cat , but also a lower K m for 3-FBA, when compared to S385Y, which mirrored the same result in S385X/D469T/R520Q variants when tested on 3-HBA and hydroxypyruvate 33 .
Computational docking of 3-FBA into TK-WT, 7M and 8M
To provide some structural insights into the effects of mutations on k cat and K m , we used computational docking of the 3-FBA substrate into the active site of the TK WT, TK-1 (7M), TK-2 (8M), TK-3 (7M) and TK-5C (8M) enzymes containing the enamine intermediate expected after prior reaction of pyruvate with the ThDP cofactor. Docking of the 3-FBA was performed in Autodock 4.2 which allowed both pose clustering (based on energy and RMSD for each pose) and an analysis of the relative populations of poses obtained for each cluster, from a total of 30. To analyse each pose and cluster in more detail, we calculated the distance, d, between the TPP enamine carbanion and the C-atom of the –CHO group of 3-FBA, to approximate the likelihood that a given pose was catalytically productive (not accounting for dynamics or angle of attack).
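The distance d used above is a simple Euclidean distance between two atoms taken from each docked pose. A minimal helper is sketched below; the coordinates are hypothetical, whereas in practice they would be read from the aligned enzyme structure and the pose PDBQT files.

```python
import numpy as np

def carbanion_to_carbonyl_distance(enamine_c_xyz, cho_c_xyz):
    """Euclidean distance (Å) between the ThDP-enamine carbanion carbon
    and the aldehyde carbon of a docked 3-FBA pose."""
    a = np.asarray(enamine_c_xyz, dtype=float)
    b = np.asarray(cho_c_xyz, dtype=float)
    return float(np.linalg.norm(b - a))

# Hypothetical coordinates (Å) extracted from one pose
print(carbanion_to_carbonyl_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # → 5.0
```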
The poses for each variant are mapped in Fig. 2 , and docking parameters shown in Table 4 . The calculated cluster-averaged binding energies were all in the range − 3.2 to − 4.4 kcal/mol, indicating broad energetic similarity between substrates in all clusters such that all could potentially provide sufficient binding for catalysis. However, not all binding orientations and positions would necessarily be productive for catalysis, and significant populations of non-reactive complexes in the actual enzyme could contribute to either an increase in the apparent K m or even lead to substrate inhibition. The maximum RMSD within each cluster varied from 0.1 to 1.0 Å, indicating a small combined degree of variance in substrate position, orientation and conformation within each cluster.
The docking produced two or three distinct clusters for each of the variants, but only a single cluster for WT (Table 4 ). We also analysed all poses together by visualising them within aligned structures to determine structural similarities and differences between clusters from each variant. This revealed that across the four mutant variants there were actually only four clusters in total, with a few outlying poses, and that these were all distinct from the only cluster (cluster 0) in WT. Cluster 0, found only in WT and comprising all WT poses, was unreactive with 7.5 < d < 8 Å. Results from AutoDock Vina were consistent with AutoDock 4.2 (average d = 7.2 Å), although Vina also identified a rare pose with d = 4.8 Å, suggesting that catalysis might be theoretically possible in WT-TK but that achieving the correct substrate binding may be frustrated by predominant binding into a non-productive location.
Of the four clusters in variants, cluster 4 was unreactive with 6.1 < d < 7.0 Å. TK2 contained the fewest potentially reactive poses (23%), with 67% in cluster 4. A further 10% for TK2 were in cluster 2b, a subcluster of cluster 2 but with a 180° relative rotation in the plane of the 3FBA ring leading to d > 9 Å. Clusters 1, 2 and 3 could be considered as “reactive”. Cluster 1 was preferred by TK1 and TK3, (even though TK3 found a single pose in cluster 2 with a lower binding energy). TK1 and TK3 each retained the wild-type arginine at residue 520. By contrast, Cluster 2 was preferred by TK2 and TK5, and these two variants contained the R520Q mutation.
To test the relationship between population, distance and experimentally determined catalytic efficiency, we created a score (Table 4 ) that weighted distance by population of poses, and linearly scaled between two cut-off values such that a distance of ≤ 4 Å scored 100% and ≥ 8 Å scored 0%. Given the limitations of docking methods this analysis does not account for dynamics or specific alignment of orbitals in favourable ways, such as at the Bergi-Dunitz angle for nucleophilic attack of a carbonyl 45 . However, intriguingly the score scaled well with k cat / K m (Fig. 3 ), confirming that control of the distance to enamine, and population distributions were both major factors in the observed kinetic differences between the variants.
How do mutations influence the binding populations?
Each pose was mapped to the active site structure by aligning them via their respective TPP-enamine cofactors, to then identify common interactions with the enzyme, and the potential role of mutations (Fig. 4 ). In previous studies, the WT enzyme was found to be inactive on the aromatic aldehyde substrate 3FBA 30 , 31 . The earlier 3M variant which included the S385Y and D469T mutations, first introduced activity towards 3-FBA in reactions with hydroxypyruvate 4 , 5 . A crystal structure of 3M and subsequent computational substrate docking of 3FBA, 4FBA and 3HBA, revealed that D469T mutation gave a hydrophobic surface for more favourable interactions with the nonpolar acceptor substrates 30 . The S385Y mutation meanwhile created opportunities for π–π stacking with the new substrates, but also sterically filled the active-site space to create a more defined binding pocket. Finally, the R520Q mutation decreased the potential for interaction with the carboxylate of 3-FBA or 4-FBA. Interestingly the docking in 3M previously showed two clusters arising from the formation of two distinct binding pockets with the aromatic ring positioned on either side of D469T. 3-FBA and 4-FBA oriented into different binding pockets, whereas 3-HBA could bind into either with no obvious preference. For 3-FBA the carboxylate group was oriented towards R91 to form a salt bridge interaction.
In the present study, docking of 3-FBA into the four key variants and WT, again revealed multiple potentially reactive binding modes, dominated by cluster 1 (Fig. 4 C and E) and cluster 2 (Fig. 4 D and F). Interestingly, the 3-FBA orientation observed in 3 M was no longer populated in the current variants, none of which made use of the previous interaction between 3-FBA and R91. Instead, the carboxylate groups remained oriented towards R520 in TK-1 (Fig. 4 C) and TK-3 (Fig. 4 E) as might be expected, but surprisingly also towards R520Q in TK-2 (Fig. 4 D) and TK-5C (Fig. 4 F). This orientation in TK-2 and TK-5C was supported through interactions with nearby H461 and to a lesser degree with R358, which were both also found in the WT docking of 3-FBA. However, the key difference to 3M appears to be the H100L mutation in the current variants. The imidazole sidechain of H100 interacts directly with the guanidinium of R91 in both WT-TK and 3M, and so the H100L mutation would directly modify the pKa for R91, most likely disfavouring ionisation and formation of a salt bridge to the 3-FBA substrate. This would explain the loss of the 3-FBA substrate orientation via R91 observed previously in 3M docking.
Results and discussion
Our goal was to find E. coli TK variants with improved activity towards the aromatic acceptor aldehydes 3-FBA, 4-FBA and 3-HBA, with pyruvate as the donor, starting from the best variant to date, 6M (H192P/A282P/I365L/G506A/D469T/H100L). The 3M variant (S385Y/D469T/R520Q) was previously found to accept aromatic aldehydes, but only with hydroxypyruvate (HPA) as donor. Previous attempts to introduce S385Y into 4M/D469T/H473N or 4M/D469T/H473N/R520Q actually led to a loss in activity towards 3-FBA with pyruvate 4 . However, this mutation was not previously tested in 6M. Other mutations of S385, including non-natural substitutions within 3M (S385X/D469T/R520Q), were also found previously to improve the activity towards aromatic aldehydes, but again only when using HPA as the donor substrate 33 . The R520Q mutation has had a stabilising effect in several variants, often also improving activity 5 , 31 , 38 , but it did not improve the activity towards 3-FBA when inserted into 4M/D469T/H473N previously 4 . We were therefore interested to explore whether any combination of the various S385X mutations and R520Q could improve the activity of 6M towards aromatic aldehydes and pyruvate.
TK variant activities with aromatic substrates
A series of TK variants were generated, expressed, purified and then evaluated for their activities towards three aromatic aldehyde acceptor substrates at 50 mM, with 50 mM pyruvate as the donor substrate. The variant names, mutations and initial levels of conversion after 24 h, based on product peak areas as a fraction of total peak area for substrate and product, are shown in Table 2 .
Among all eight variants, TK-1, TK-2, TK-3 and TK-5C showed the greatest conversion to product after 24 h, compared to no conversion with TK-WT. TK-3 and TK-5C also showed the greatest conversion when using 4-FBA. The level of conversion for 3-HBA remained relatively low with all variants. In previous work, the 6M variant (4M/H100L/D469T) gave only a 2.5% conversion of 3-FBA after 24 h. This could be increased at 1.3 mg/mL enzyme to 46.8%, but was still lower than the 62.2% achieved with the new TK-3 variant. Given that the greatest activities were obtained with 3-FBA, we focused further characterisation on that substrate, selecting the four most prominent variants TK-1, TK-2, TK-3 and TK-5C. We also used the conversion of 3-FBA with each TK variant to isolate the product and confirm its identity, as previously 4 , using a combination of LC–MS and NMR ( Supplementary information, Figs. S1, S2, S3 ).
Kinetic and stability studies of TK variants with substrate 3-FBA and pyruvate
The kinetic parameters for TK-1, TK-2, TK-3 and TK-5C were determined at 0.1 mg/mL enzyme and 50 mM pyruvate by varying the concentration of 3-FBA from 0 to 150 mM, and monitoring reactions in triplicate for up to 24 h. The kinetic parameters are shown in Table 3 , alongside the specific activity at 50 mM 3-FBA and the conversion at 24 h (taken from Table 2 ). For comparison, the parameters determined previously for 6M are also shown, although the K m for 3-FBA was not determined in that case, as the experiment varied pyruvate instead. It was not possible to obtain parameters for WT-TK as no activity was detected, so rates and rate constants are set to 0, while the K m cannot be determined.
Compared to 6M, all four variants demonstrated significant increases in specific activity, V m and k cat . The specific activity of TK-1 in particular was 400× higher than for 6M, while the V m was 900× greater. While the k cat values for TK-1 and TK-2 were 2–3× higher than for TK-3 and TK-5C, their K m values were correspondingly higher. The resulting k cat / K m values were fairly similar for TK-1, TK-3 and TK-5C at 0.52–0.6/s mM, with TK-2 at a lower value of 0.2/s mM. Clearly the two larger K m values (for TK-1 and TK-2) exceeded the studied range of 3-FBA (up to 150 mM), resulting in the larger errors on their estimated values.
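For intuition, the quantities compared above follow from the standard Michaelis–Menten relationships; the sketch below uses illustrative placeholder values of roughly these magnitudes (units assumed mM and s), not the fitted parameters from Table 3.

```python
def michaelis_menten_rate(vmax, km, s):
    """Initial rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

def catalytic_constants(vmax, enzyme_conc, km):
    """kcat = Vmax / [E]; catalytic efficiency = kcat / Km."""
    kcat = vmax / enzyme_conc
    return kcat, kcat / km

# Illustrative values only: Vmax = 0.05 mM/s, [E] = 0.001 mM and
# Km = 100 mM give kcat = 50 /s and kcat/Km = 0.5 /s mM.
kcat, efficiency = catalytic_constants(vmax=0.05, enzyme_conc=0.001, km=100.0)
```

Note how a large K m relative to the tested substrate range inflates the uncertainty on both K m and V m , since the hyperbola is only sampled well below saturation.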
The stability of the variants was also measured using thermal scanning fluorimetry, with intrinsic fluorescence as the probe. TK is already known to aggregate rapidly upon unfolding 44 , but comparisons can still be made between variants with consistent ramping protocols, and so an apparent thermal transition mid-point ( T m ) was estimated. The T m for wild-type TK was 57.8 °C, consistent with a previous measurement of 58.3 °C under similar conditions (0.1 mg/mL, 25 mM Tris–HCl, pH 7.5, 0.5 mM MgCl 2 , 0.05 mM TPP) 44 .
The variants all had increased T m values, reaching 65.1 °C and 63.7 °C for TK-3 and TK-5C respectively. TK-1 and TK-2 gave more modest increases, to 59.2 °C and 61.4 °C respectively. It was previously shown that four of the mutations used within 6M increased the T m over wild-type TK by 3.6 °C 35 , whereas their introduction into 3M increased the T m by 3 °C 34 . These earlier experiments suggest that a significant part of the 1.4–7.3 °C increase in stability of the new variants was likely to have already been present in 6M.
Overall, TK-3 was the most stable, had the lowest K m for 3-FBA and performed well in terms of catalytic efficiency ( k cat / K m ) and conversion (62%). However, for biocatalysis under conditions where substrate can be added in excess over K m , the k cat and stability are the more important factors. From that perspective, the variants TK-1 and TK-2 might be preferred for their elevated activity, despite their slightly lower T m values relative to TK-3.
It is worth comparing the impact of the R520Q and S385X mutations here with their addition to previous variants. R520Q in TK-2 increased the T m over TK-1 by 2.2 °C with no adverse impact on activity, consistent with its stabilising effects in previous studies. S385Y and S385F each had a significant impact on activity when added to 6M in the current study, leading to respective 630× and 200× increases in k cat for the reaction with 3-FBA and pyruvate. This contrasts with the previous impact of adding S385Y into the D469T/R520Q variant (to form “3M”), which improved the catalytic efficiency towards 3-FBA and HPA by significantly decreasing the K m for 3-FBA. Furthermore, when the S385Y mutation was incorporated into 4M/D469T/H473N and 4M/D469T/H473N/R520Q, it led to a loss of activity towards 3-FBA with pyruvate 4 . The contextual mutations were clearly very important for the effects of S385Y. S385F gave a lower k cat , but also a lower K m for 3-FBA, when compared to S385Y, which mirrored the same result in S385X/D469T/R520Q variants when tested on 3-HBA and hydroxypyruvate 33 .
Computational docking of 3-FBA into TK-WT, 7M and 8M
To provide some structural insights into the effects of mutations on k cat and K m , we used computational docking of the 3-FBA substrate into the active sites of the TK-WT, TK-1 (7M), TK-2 (8M), TK-3 (7M) and TK-5C (8M) enzymes containing the enamine intermediate expected after prior reaction of pyruvate with the ThDP cofactor. Docking of 3-FBA was performed in Autodock 4.2, which allowed both pose clustering (based on energy and RMSD for each pose) and an analysis of the relative populations of poses obtained for each cluster, from a total of 30 poses. To analyse each pose and cluster in more detail, we calculated the distance, d, between the TPP enamine carbanion and the C-atom of the –CHO group of 3-FBA, to approximate the likelihood that a given pose was catalytically productive (not accounting for dynamics or angle of attack).
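The distance criterion itself is a simple Euclidean measurement between two atoms; the sketch below uses invented coordinates, and treats the ≤ 4 Å end of the scoring range described below as a nominal "productive" cut-off, which is our simplification rather than a threshold stated for the docking runs.

```python
import math

def carbanion_to_carbonyl_distance(enamine_c, aldehyde_c):
    """Euclidean distance (in Angstroms) between the enamine carbanion
    carbon and the carbonyl (-CHO) carbon of a docked 3-FBA pose."""
    return math.dist(enamine_c, aldehyde_c)

# Hypothetical 3-D coordinates, for illustration only (not from any structure):
d = carbanion_to_carbonyl_distance((0.0, 0.0, 0.0), (1.0, 2.0, 2.0))

# Our simplifying cut-off: poses at or below ~4 Angstroms are treated as
# the most plausibly productive ones.
is_potentially_reactive = d <= 4.0
```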
The poses for each variant are mapped in Fig. 2 , and docking parameters shown in Table 4 . The calculated cluster-averaged binding energies were all in the range − 3.2 to − 4.4 kcal/mol, indicating broad energetic similarity between substrates in all clusters such that all could potentially provide sufficient binding for catalysis. However, not all binding orientations and positions would necessarily be productive for catalysis, and significant populations of non-reactive complexes in the actual enzyme could contribute to either an increase in the apparent K m or even lead to substrate inhibition. The maximum RMSD within each cluster varied from 0.1 to 1.0 Å, indicating a small combined degree of variance in substrate position, orientation and conformation within each cluster.
The docking produced two or three distinct clusters for each of the variants, but only a single cluster for WT (Table 4 ). We also analysed all poses together by visualising them within aligned structures to determine structural similarities and differences between clusters from each variant. This revealed that across the four mutant variants there were actually only four clusters in total, with a few outlying poses, and that these were all distinct from the only cluster (cluster 0) in WT. Cluster 0, which contained all WT poses, was unreactive with 7.5 < d < 8 Å. Results from Autodock Vina were consistent with Autodock 4.2 (average d = 7.2 Å), although Vina also identified a rare pose with d = 4.8 Å, suggesting that catalysis might be theoretically possible in WT-TK but that achieving the correct substrate binding may be frustrated by binding predominantly into a non-productive location.
Of the four clusters found in the variants, cluster 4 was unreactive with 6.1 < d < 7.0 Å. TK-2 contained the fewest potentially reactive poses (23%), with 67% in cluster 4. A further 10% of poses for TK-2 were in cluster 2b, a subcluster of cluster 2 but with a 180° relative rotation in the plane of the 3-FBA ring leading to d > 9 Å. Clusters 1, 2 and 3 could be considered as “reactive”. Cluster 1 was preferred by TK-1 and TK-3 (even though TK-3 found a single pose in cluster 2 with a lower binding energy). TK-1 and TK-3 each retained the wild-type arginine at residue 520. By contrast, cluster 2 was preferred by TK-2 and TK-5C, and these two variants contained the R520Q mutation.
To test the relationship between population, distance and experimentally determined catalytic efficiency, we created a score (Table 4 ) that weighted distance by the population of poses, linearly scaled between two cut-off values such that a distance of ≤ 4 Å scored 100% and ≥ 8 Å scored 0%. Given the limitations of docking methods, this analysis does not account for dynamics or for specific alignment of orbitals in favourable ways, such as at the Bürgi–Dunitz angle for nucleophilic attack of a carbonyl 45 . However, intriguingly, the score scaled well with k cat / K m (Fig. 3 ), confirming that control of the distance to the enamine and the population distributions were both major factors in the observed kinetic differences between the variants.
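The population-weighted score can be written compactly; the 4–8 Å linear scale follows the description above, while the cluster fractions and distances in the example are invented rather than taken from Table 4.

```python
def distance_score(d, d_min=4.0, d_max=8.0):
    """Linearly scale a distance to a percentage:
    d <= 4 Angstroms -> 100 %, d >= 8 Angstroms -> 0 %."""
    frac = (d_max - d) / (d_max - d_min)
    return 100.0 * min(1.0, max(0.0, frac))

def variant_score(clusters):
    """Population-weighted score for one variant, where each cluster is
    given as a (fraction_of_poses, mean_distance) pair."""
    return sum(p * distance_score(d) for p, d in clusters)

# Hypothetical cluster summaries (fraction, mean d), not the Table 4 values:
score = variant_score([(0.7, 5.0), (0.3, 9.0)])  # 0.7 * 75 + 0.3 * 0 = 52.5
```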
How do mutations influence the binding populations?
Each pose was mapped to the active site structure by aligning them via their respective TPP-enamine cofactors, to then identify common interactions with the enzyme, and the potential role of mutations (Fig. 4 ). In previous studies, the WT enzyme was found to be inactive on the aromatic aldehyde substrate 3-FBA 30 , 31 . The earlier 3M variant, which included the S385Y and D469T mutations, first introduced activity towards 3-FBA in reactions with hydroxypyruvate 4 , 5 . A crystal structure of 3M, and subsequent computational docking of 3-FBA, 4-FBA and 3-HBA, revealed that the D469T mutation gave a hydrophobic surface for more favourable interactions with the nonpolar acceptor substrates 30 . The S385Y mutation meanwhile created opportunities for π–π stacking with the new substrates, but also sterically filled the active-site space to create a more defined binding pocket. Finally, the R520Q mutation decreased the potential for interaction with the carboxylate of 3-FBA or 4-FBA. Interestingly, the docking in 3M previously showed two clusters arising from the formation of two distinct binding pockets, with the aromatic ring positioned on either side of D469T. 3-FBA and 4-FBA oriented into different binding pockets, whereas 3-HBA could bind into either with no obvious preference. For 3-FBA, the carboxylate group was oriented towards R91 to form a salt-bridge interaction.
In the present study, docking of 3-FBA into the four key variants and WT again revealed multiple potentially reactive binding modes, dominated by cluster 1 (Fig. 4 C and E) and cluster 2 (Fig. 4 D and F). Interestingly, the 3-FBA orientation observed in 3M was no longer populated in the current variants, none of which made use of the previous interaction between 3-FBA and R91. Instead, the carboxylate groups remained oriented towards R520 in TK-1 (Fig. 4 C) and TK-3 (Fig. 4 E) as might be expected, but surprisingly also towards R520Q in TK-2 (Fig. 4 D) and TK-5C (Fig. 4 F). This orientation in TK-2 and TK-5C was supported through interactions with nearby H461 and, to a lesser degree, with R358, both of which were also found in the WT docking of 3-FBA. However, the key difference to 3M appears to be the H100L mutation in the current variants. The imidazole sidechain of H100 interacts directly with the guanidinium of R91 in both WT-TK and 3M, and so the H100L mutation would directly modify the pKa of R91, most likely disfavouring ionisation and formation of a salt bridge to the 3-FBA substrate. This would explain the loss of the 3-FBA substrate orientation via R91 observed previously in 3M docking.
When comparing cluster 1 (favoured by TK-1 and TK-3) to cluster 2 (preferred by TK-2 and TK-5C), the key change was the R520Q mutation in TK-2 and TK-5C. This mutation pulls 3-FBA slightly further away from the TPP-enamine (see distance d in Table 4 ), but also pulls it across the surface created by the D469T sidechain, thus enabling the aromatic ring face to rotate approximately 90 degrees and fit better into the cluster 2 binding pocket. By contrast, rotation of the 3-FBA ring face in TK-1 and TK-3, while maintaining the interaction with R520, would result in an unfavourable steric clash with the side chain of D469T. | Conclusions
While earlier attempts to combine aromatic aldehyde-accepting mutations (S385Y, R520Q) into pyruvate accepting variants led to losses in activity, the current small library of S385X and R520Q mutations within the “6M” scaffold resulted in variants with up to 630× increases in k cat for the reaction with 3-FBA and pyruvate. The best variants also retained enzyme stability as measured by their thermal transition midpoints. Variants with more modest 200× increases in k cat had stabilities 4–5 °C higher. The K m values of variants remained relatively high (80–530 mM) suggesting significant scope for further improvement, although in a biocatalytic process the aim would be to use high substrate concentrations that exceed the K m values achieved.
This work demonstrates that rational recombination of mutations does not always combine their respective attributes, e.g. improved acceptance of new donor and acceptor substrates in the current case. However, by using small libraries based on the mutational sites identified previously, we can often find a successful solution to combining the multiple attributes.
Computational docking of substrates into the variant enzyme active sites explained the effects of the mutations in terms of shaping the active site pocket as well as in guiding the orientation of the aromatic aldehyde and its proximity to the enamine-TPP intermediate. The distance to the enamine achieved, weighted according to the population of poses found in each clustered position, gave a good correlation to the observed k cat / K m , indicating a strong role of that distance in improving enzyme activity, even before considering the more detailed effects of protein dynamics or specific orbital alignments. | Improving the range of substrates accepted by enzymes with high catalytic activity remains an important goal for the industrialisation of biocatalysis. Many enzymes catalyse two-substrate reactions which increases the complexity in engineering them for the synthesis of alternative products. Often mutations are found independently that can improve the acceptance of alternatives to each of the two substrates. Ideally, we would be able to combine mutations identified for each of the two alternative substrates, and so reprogramme new enzyme variants that synthesise specific products from their respective two-substrate combinations. However, as we have previously observed for E. coli transketolase, the mutations that improved activity towards aromatic acceptor aldehydes, did not successfully recombine with mutations that switched the donor substrate to pyruvate. This likely results from several active site residues having multiple roles that can affect both of the substrates, as well as structural interactions between the mutations themselves. Here, we have designed small libraries, including both natural and non-natural amino acids, based on the previous mutational sites that impact on acceptance of the two substrates, to achieve up to 630× increases in k cat for the reaction with 3-formylbenzoic acid (3-FBA) and pyruvate. 
Computational docking was able to determine how the mutations shaped the active site to improve the proximity of the 3-FBA substrate relative to the enamine-TPP intermediate, formed after the initial reaction with pyruvate. This work opens the way for small libraries to rapidly reprogramme enzyme active sites in a plug and play approach to catalyse new combinations of two-substrate reactions.
Subject terms | Supplementary Information
| Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-024-51831-z.
Acknowledgements
We gratefully acknowledge the Engineering and Physical Sciences Research Council (EPSRC) for funding (EP/P006485/1) and also Marie Skłodowska-Curie Individual Fellowship (795539) that supported Arka Mukhopadhyay.
Author contributions
A.M. carried out the mutagenesis, purification and enzyme kinetics. A.M. and P.A.D performed data analyses and wrote the manuscript. P.A.D designed and supervised research.
Data availability
All data reported here are available on request to the corresponding author.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:57 | Sci Rep. 2024 Jan 14; 14:1287 | oa_package/1e/64/PMC10787763.tar.gz |
PMC10787764 | 38218910 | Introduction
The contribution of multiple individuals to group decision-making can bring substantial benefits 1 . Shared decisions can be more accurate 2 , 3 ; for example, homing pigeons ( Columba livia ) have more direct homing routes when flying in dyads than when flying alone 4 . Shared decisions also allow all group members to acquire vital resources while remaining part of the group 5 , as they allow individuals in a state of need to influence group decisions 6 . While many studies have found evidence in support of shared decision-making, for example by observing a range of different individuals initiating movements 7 – 9 , the extent to which collective decision-making is governed by similar movement rules across species requires further investigation 10 , 11 .
Collective decisions can be an emergent outcome of the movement interactions among individuals 12 . The classic theoretical model that proposes this hypothesis provides two sets of testable predictions: (i) that the geometry of a conflict in preferences among initiators (the angle of their directional vectors) should determine the actions of followers, and (ii) that individuals should follow a majority rule when choosing which direction to follow 1 , 11 , 12 . The first prediction is that when faced with differences in the preferred direction of movement of group members, followers should average between directions if the disagreement among initiators is small (i.e. ‘compromise’) or choose one option over the other (i.e. ‘choose’) if the disagreement is large (above a critical angle 12 ). Greater disagreement (e.g. a larger angle between initiators and/or having more initiators proposing different directions) should also reduce the probability of following 13 . The second prediction is that when choosing a direction, followers should move where the majority of preferences are directed 12 . These key predictions allow quantitative comparisons of the processes driving collective decisions across different species. However, testing these predictions is challenging, as they require information about how potential decision-makers—both initiators and followers—move relative to one-another 14 .
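These two predictions can be illustrated with a toy decision rule for a follower faced with two clusters of initiators; the 90° critical angle and the example vectors below are arbitrary placeholders (in the model the critical angle emerges from its parameters), so this is a sketch of the logic rather than the model itself.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 2-D unit direction vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def follower_direction(dir_a, n_a, dir_b, n_b, critical_angle=90.0):
    """Toy compromise-vs-choose rule: average the two proposed directions
    when disagreement is below the critical angle, otherwise follow the
    direction backed by the larger number of initiators (majority rule)."""
    if angle_between(dir_a, dir_b) < critical_angle:
        vx, vy = dir_a[0] + dir_b[0], dir_a[1] + dir_b[1]
        norm = math.hypot(vx, vy)
        return (vx / norm, vy / norm)
    return dir_a if n_a >= n_b else dir_b

# Small disagreement (45 degrees): the follower compromises on the mean heading.
east = (1.0, 0.0)
northeast = (math.sqrt(0.5), math.sqrt(0.5))
compromise = follower_direction(east, 2, northeast, 3)

# Large disagreement (180 degrees): the follower chooses the majority direction.
west = (-1.0, 0.0)
chosen = follower_direction(east, 2, west, 3)
```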
Two studies have provided evidence for the geometric prediction of the aforementioned classic model of collective motion in semi-wild or wild animal groups 4 , 13 . GPS-tracking of pairs of homing pigeons showed that if the disagreement between the two birds’ directional preferences when flying back home was small, individuals averaged their routes. Instead, if disagreement was over a critical threshold, either the dyad split or one of the two birds became the leader 4 . However, as the study was conducted on dyads, there was no test of the classic model’s prediction on which direction individuals would choose when faced with large disagreement and a numerical difference between the clusters of concurrent initiators. That gap was covered by a study 13 that fitted GPS trackers to the majority of individuals in a troop of olive baboons, a species in which individuals form groups with very stable membership. By analysing the relative movements of individuals, and extracting initiations and following behaviours, the study showed support for the two sets of predictions for shared decision-making emerging from interactions among individuals. First, individuals were less likely to follow when there was greatest directional conflict among initiators, but when following, individual baboons averaged proposed directions by initiators when the disagreement was small and chose one or the other when the disagreement was large 13 . Second, when choosing a direction, individual baboons used a majority rule—moving in the direction with the largest number of initiators 13 . Thus, evidence is beginning to suggest that emergent decision-making processes might be relatively common across animals that move as groups, and could potentially be underpinned by a consistent set of individual decision rules.
One challenge with determining whether species use similar rules when making decisions is that careful replication is required. While the replication crisis in biology 15 – 17 largely stems from incentive structures favouring novelty 18 , there are also logistical barriers to replication. For example, one recent study 19 testing whether the increase of CO 2 in the ocean impacts the behaviour of coral reef fish replicated previous experiments by examining a large number of captive fish (900) from multiple species (6) and across several years (3), matching the conditions of older experiments and finding low support for the original results. However, critics—rightly or wrongly—noted that methodological differences could also contribute to differences in the results 20 , meaning that the true answer remains largely unknown. The challenges that are inherent in working with whole organisms, and with the different ecological conditions that they might experience in different studies, mean that replications remain relatively rare. While large-scale collaborative networks 21 , 22 can overcome some of the barriers to making comparative studies, large-scale and long-term studies conducted in the wild often cannot be replicated, despite these being among the most influential 23 – 26 . A consequence of this is not only a lack of certainty in our scientific results, but also a lack of data on the generality of our findings.
Here, we conduct a within- and between-species replication study to investigate how consensus is achieved when individuals are faced with conflicting directional preferences among group members. We study vulturine guineafowl, a species sympatric with olive baboons that also forms stable and cohesive groups. A previous observational study on collective departures suggested that every member of a vulturine guineafowl group can initiate movement from a scattered food resource, but that individuals excluded from clumped food patches are more likely to lead their group after receiving aggression 6 . These findings indicated that dominance can play a role in modulating leadership—at least in the context of departures from food patches. In the present study, we fit high-resolution solar-powered GPS trackers to almost all adults from two groups of vulturine guineafowl and implement the same analytical procedure as the previous study on baboons 13 to determine how group members reach consensus across a broader range of movements. We first confirm that all group members can successfully initiate movement, and that males (who are at the top of the dominance hierarchy 27 ) have greater influence on group movements than females. We then show that vulturine guineafowl express the same geometric properties and majority rule as predicted by the classic theoretical model of leadership and collective decision-making 12 , and match almost exactly the empirical results observed in wild olive baboons 13 . Our study provides a powerful replication of previous empirical work, enabling quantitative comparisons between observational and GPS-based methods, and between two taxonomically distant species that live in the same habitat.
Data collection
Our study population of vulturine guineafowl resides in a savannah-woodland ecosystem of approximately 12 km 2 in the southern part of the Mpala Research Centre (MRC) in Laikipia, Kenya. Vulturine guineafowl are large (~1.5 kg), predominantly terrestrial, and live in relatively large groups (13–65 adults) with largely stable membership 52 . Groups are not territorial and associate preferentially with specific other groups 52 .
GPS trackers
We fitted solar-powered GPS tags to almost all adult members of two groups of vulturine guineafowl. We programmed the GPS tags to simultaneously collect 1 Hz data every fourth day from 06:00 to 19:00, allowing them to fully recharge over three days before starting a full day of operation. For the purposes of other research projects running at the same time 53 , we set one to two tags in each group to work on a daily schedule, and during some months the tags of all individuals in focal Group 2 were programmed to work on a daily basis (see Supplementary Table 4 for the group size per month, the number of tagged individuals, how long they were tracked, and the GPS tag programming setting; see also Supplementary Movies 1 – 6 for a demonstration of the whole-group tracking datasets of Groups 1 and 2). This ‘daily’ setting recorded one data point (date, time, coordinates) every second when the battery had a high charge (approximately every second to third day, for up to 8 h continuously). When the battery was at the next-highest threshold, tags recorded 10 points spanning the first 10 s of every fifth minute. At the lowest battery threshold, tags recorded one point every 15 min (this setting was used less than 1% of the time).
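The three battery-dependent schedules imply very different data volumes per tracking day; the sketch below summarises them (the labels 'high', 'medium' and 'low' are ours, as the tags switch on battery thresholds whose exact values are not given here).

```python
# Approximate fixes per hour under each battery-dependent schedule:
FIXES_PER_HOUR = {
    "high": 3600,        # 1 fix per second, up to 8 h continuously
    "medium": 12 * 10,   # 10 fixes in the first 10 s of every fifth minute
    "low": 60 // 15,     # one fix every 15 min (<1% of the time)
}

def fixes_per_day(level, hours=13):
    """Rough daily data volume for a 06:00-19:00 (13 h) tracking day."""
    return FIXES_PER_HOUR[level] * hours
```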
We conducted census observations every two days (on average) to record changes in group size and the number of tagged individuals per group across the study period, as some individuals were predated or lost their tags. We summarise this information in Supplementary Table 4 .
Dominance hierarchies
First, to estimate the dominance hierarchy, we conducted all-occurrence sampling in each group, recording different types of agonistic interactions, as described by Papageorgiou & Farine 6 and Dehnen et al. 27 . For each observed interaction, we recorded the time, the winner, and the loser. We recorded data over at least 3 sessions, lasting 2–3 h each, per group, per week across the study period (restricted to days when simultaneous GPS tracking was not taking place). From the agonistic interaction data, we calculated a dominance hierarchy for each group using the randomised Elo scores method 54 .
To test if the dominance hierarchy remained stable during the study period, we calculated the repeatability score of ranks by randomising the order of the data, splitting the dataset into two halves, and calculating the Spearman rank correlation coefficients across the estimates of ranks from each half. We repeated this process 1000 times, using the function ‘estimate_uncertainty_by_splitting’ from the ‘aniDom’ R package 54 , to estimate a mean and 95% confidence intervals of the correlation values.
Data processing
We used (and adapted where necessary) the methods and published code developed by Strandburg-Peshkin et al. 13 . We repeated each of the following steps on the data from the two study groups separately.
Pre-processing GPS data
We used the built-in features of the Movebank data repository to remove outliers from our dataset that fell outside our study area (<0.001% of the data, corresponding to points that were often outside of Kenya). In the rare cases when a tag failed to log one point (e.g. skipping one second, 0.16–0.21% of the data in both groups), we linearly interpolated missing points based on the existing data around that point from the same tag. More specifically, if there was a missing value at time t, between t − 1 s and t + 1 s, we added one point at time t, at the midpoint of the straight line connecting the two known points at t − 1 s and t + 1 s.
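The interpolation rule described above amounts to a midpoint fill for isolated one-second gaps. A minimal sketch, assuming a track is a list of (x, y) fixes at 1 Hz with None marking a skipped second:

```python
def fill_single_gaps(track):
    """Linearly interpolate isolated missing 1 Hz fixes: if the point at time t
    is missing but t - 1 and t + 1 exist, place t at the midpoint of the
    straight line connecting the two known points."""
    out = list(track)
    for t in range(1, len(out) - 1):
        if out[t] is None and out[t - 1] is not None and out[t + 1] is not None:
            (x0, y0), (x1, y1) = out[t - 1], out[t + 1]
            out[t] = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
    return out
```

Gaps longer than one second are left untouched, matching the restriction to single skipped points.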
Extracting successful and failed initiation attempts at the dyadic level
We extracted movement initiations and their outcomes by identifying maxima and minima in the dyadic distance between a given pair of individuals i and j. The data between a minimum and a maximum identified cases when an individual i moved away from another j (i.e. an initiation). The subsequent behaviour of the pair between the maximum and the following minimum determined the interpretation of the event. If j moved towards the direction of i, the outcome was defined as a “pull”, whereas if i moved back towards j, then the outcome was defined as an “anchor”.
We used a set of thresholds to remove pulls and anchors potentially arising from GPS noise or small movements. Specifically, we defined initiation events as only those in which the minimum change in distance between i and j was more than 3.5 m. We believe this threshold to be biologically relevant considering the scale at which the movements of vulturine guineafowl take place, especially given their high degree of spatial cohesion. It is also above the error of the GPS tags, as our field testing suggested that the estimated relative position of two GPS tags is accurate to within 1 m more than 95% of the time 52 . Further, we determined that pull or anchor events required one individual to do a disproportionate amount of the movement, setting a “disparity” threshold of 0.1, whereby 0 represents both individuals having moved equally during an event and 1 represents a single individual having done all of the moving during the event. Finally, we set a “strength” threshold of 0.1; strength ranges from 0 when the change in dyadic distance is very small relative to the total dyadic distance (i.e. small movements by individuals that are far apart) to 1 when the change in dyadic distance is very large compared to the total dyadic distance (large movements by individuals that start close together). The latter two are the same settings as the original study by Strandburg-Peshkin et al., whereas we set the minimum change in distance to a smaller value (3.5 m instead of 5 m) as vulturine guineafowl are substantially smaller and more cohesive in their movements than baboons. The dyadic distances throughout the process of initiation are shown in Fig. 1 , confirming the small distances over which leadership interactions take place in vulturine guineafowl.
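As a rough illustration of the three thresholds, the following sketch classifies a single min→max→min excursion of the dyadic distance. The disparity and strength definitions here are simplified stand-ins for those in the published code (strength is approximated as the change relative to the maximum dyadic distance), and the move_i/move_j arguments summarise each individual's displacement during the closing (max→min) phase.

```python
def classify_event(d_min1, d_max, d_min2, move_i, move_j,
                   min_change=3.5, min_disparity=0.1, min_strength=0.1):
    """Classify one min->max->min excursion of the dyadic distance between
    initiator i and candidate follower j. Thresholds follow the Methods:
    >3.5 m minimum change, disparity and strength both above 0.1."""
    change = min(d_max - d_min1, d_max - d_min2)
    if change <= min_change:
        return None                      # too small: likely GPS noise
    total = move_i + move_j
    disparity = abs(move_i - move_j) / total if total else 0.0
    strength = change / max(d_max, 1e-9)  # simplified proxy for the paper's measure
    if disparity < min_disparity or strength < min_strength:
        return None
    # j closed most of the gap -> pull; i returned towards j -> anchor
    return "pull" if move_j > move_i else "anchor"
```

For example, a 9 m excursion closed almost entirely by the candidate follower is a pull, while the same excursion closed by the initiator returning is an anchor.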
We kept in subsequent analyses only events that took place when at least half of group members’ tags were collecting data, which largely matched the distribution of the data in the original baboon study. In that study, 80% of adults and subadults were tagged; however, some tags stopped working for periods of data collection, meaning that as few as 16 of the 26 collared baboons (55%) collected data on some days 13 . We also applied, and present, the results using a threshold keeping only events when at least 80% of group members’ tags collected data at the same time. The results are presented in the Supplementary Note 1 of the Supplementary Materials (Supplementary Tables 5 – 8 and Supplementary Figs. 4 – 8 ) and show that the patterns in our results are not sensitive to the choice of threshold.
Identifying simultaneous initiation events
To investigate pulls and anchors beyond the dyadic level, we grouped together interactions (potential pulls and anchors) that operated simultaneously (i.e. involving one or more initiation attempts that overlapped in time) on one potential follower, and we defined this as an event. We considered interactions as overlapping in time using a chain rule, meaning that if interaction A overlapped with B, and interaction B overlapped with C, then all three would be combined into one event regardless of whether interaction A overlapped with interaction C. For each event, we calculated the direction of the initiators in relation to the position of the potential follower, whether the potential follower was pulled or not, and the direction of the subsequent movement of the follower if the follower was pulled. We defined events as successful if, and only if, at least one initiator was recorded as having pulled the potential follower. To test for a majority rule (i.e. whether followers moved in the direction with the most initiators), we also clustered initiators according to their direction using Gaussian Mixture Models 55 .
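The chain rule for combining overlapping initiations is equivalent to merging overlapping time intervals. An illustrative sketch, assuming initiation attempts on one candidate follower are given as (start, end, initiator) tuples:

```python
def merge_overlapping(initiations):
    """Group initiation attempts (start, end, initiator_id) acting on one
    candidate follower into events using the chain rule: if A overlaps B and
    B overlaps C, all three form one event even if A and C do not overlap."""
    attempts = sorted(initiations, key=lambda a: a[0])
    events = []
    for start, end, who in attempts:
        if events and start <= events[-1]["end"]:
            ev = events[-1]                      # chains onto the open event
            ev["end"] = max(ev["end"], end)
            ev["initiators"].append(who)
        else:
            events.append({"start": start, "end": end, "initiators": [who]})
    return events
```

Because attempts are processed in order of start time, each new attempt either chains onto the currently open event or opens a new one, which is exactly the transitive-overlap rule described above.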
Statistics and reproducibility
Does dominance predict influence?
To investigate if dominance predicts influence within each group, we created a matrix representing the relative influence among dyads, with the influence index for dyad i, j defined as I_ij = (p_ij − p_ji) / (p_ij + p_ji), where p_ij represents the number of events in which individual i pulled individual j. The index ranges from −1 (j pulled i in all events) to 1 (i pulled j in all events), with 0 representing no difference in influence among the two individuals. From these data, we calculated influence ranks by summing each individual’s indices and ranking these sums such that individuals with a larger sum were considered to be more influential.
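An illustrative computation of the influence index and the resulting ranks, assuming pull counts are stored in a dictionary keyed by ordered dyads (this mirrors the description above rather than the authors' R code):

```python
def influence_ranks(pulls, ids):
    """pulls[(i, j)] = number of events in which i pulled j.
    The dyadic influence index is (p_ij - p_ji) / (p_ij + p_ji); individuals
    are ranked by their summed indices (larger sum = more influential)."""
    sums = {}
    for i in ids:
        s = 0.0
        for j in ids:
            if i == j:
                continue
            pij = pulls.get((i, j), 0)
            pji = pulls.get((j, i), 0)
            if pij + pji:
                s += (pij - pji) / (pij + pji)
        sums[i] = s
    ordered = sorted(ids, key=lambda k: -sums[k])
    return {ind: rank + 1 for rank, ind in enumerate(ordered)}
```

Dyads with no recorded events contribute nothing to either individual's sum, so the index is only defined where interactions were observed.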
We examined the effects of dominance and sex on influence rank by running four permutation tests:
(i) We calculated the mean absolute difference between dominance rank and influence rank for each individual. If dominant individuals were more highly ranked in the influence matrix, then we expected this value to approach 0. We evaluated the significance of our measure by recalculating the same value 1000 times after randomising the order of individuals’ dominance ranks relative to their influence ranks.
(ii–iii) We tested whether there was a within-sex effect of dominance by conducting the same test as (i) in males and females independently.
(iv) We tested whether males were more likely to be on the top of the influence hierarchy by calculating the mean of the absolute difference between two binary variables. The first variable represented whether an individual was within the top n-ranked more influential individuals, where n represented the number of males in the group. The second binary variable represented whether the individual was a male or not. We then re-calculated this value in 1000 permutations randomising the link between the two binary variables.
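Permutation test (i) can be sketched as follows. This is an illustrative Python version: the observed statistic is the mean absolute difference between dominance and influence ranks, and significance is assessed by how often shuffled dominance ranks produce a value at least as close to 0.

```python
import random

def dominance_influence_test(dom_rank, infl_rank, n_perm=1000, seed=1):
    """Permutation test (i): mean absolute difference between dominance and
    influence rank, compared against shuffles of the dominance ranks.
    Returns (observed, p), where p is the share of permutations with a value
    at least as close to 0 as the observed one."""
    rng = random.Random(seed)
    ids = list(dom_rank)
    obs = sum(abs(dom_rank[i] - infl_rank[i]) for i in ids) / len(ids)
    perm_dom = [dom_rank[i] for i in ids]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(perm_dom)
        val = sum(abs(d - infl_rank[i]) for d, i in zip(perm_dom, ids)) / len(ids)
        if val <= obs:
            hits += 1
    return obs, hits / n_perm
```

Tests (ii)–(iv) follow the same skeleton with the statistic swapped out (within-sex subsets, or the binary top-n-versus-male comparison).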
Then, we also examined the effects of dominance and sex on the rates of successful initiations (pulls) per hour. To do this, we ran permutation tests similar to (i–iv), but replaced influence rank with ranks based on each individual’s rate of successful initiations per hour.
In all the permutation tests, we considered an effect to be significant at α = 0.05 if the observed value was closer to 0 than 95% of the values generated by the permuted datasets.
Finally, to examine the effect of sex on the success rate of initiations, we extracted cases in which there were two simultaneous pullers comprising one male and one female initiator. We then calculated the proportion of events in which the male was the successful puller. We tested the significance of this measure by randomising the sexes across all of these events 1000 times. This allowed us to test if males were more successful pullers than expected by chance. Here, we considered an effect to be significant at α = 0.05 if the observed value was greater than 95% of the values generated by the permuted datasets.
How do the agreement and number of initiators affect whether initiators are successful?
We first tested the factors that contributed to individuals’ decisions about whether to follow or not. From the full set of events, we constructed a generalised estimating equations (GEE) model testing whether the decision of a focal individual to follow or not (binary response variable, where pull = 1 and anchor = 0) was predicted by the level of agreement and the number of initiators. We quantified directional agreement among simultaneous initiators using the circular variance (cv) of the unit vectors pointing from the potential follower to each initiator in the event, and defined agreement as 1 − cv. Values of agreement are close to 0 when individuals initiate in opposing directions and approach 1 when all individuals initiate in the same direction. Given that events include all simultaneous initiations by default, our GEE did not include an autocorrelation structure. We used the R package ‘geepack’ 56 to fit the GEE model.
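The agreement measure is one minus the circular variance, which equals the mean resultant length of the initiators' unit vectors. A minimal sketch, with bearings from the potential follower to each initiator given in radians:

```python
import math

def agreement(bearings):
    """Directional agreement among simultaneous initiators: the mean resultant
    length R of the unit vectors (equivalently, 1 minus the circular variance).
    Returns 1 when all bearings coincide and 0 when they cancel out."""
    n = len(bearings)
    c = sum(math.cos(b) for b in bearings) / n
    s = sum(math.sin(b) for b in bearings) / n
    return math.hypot(c, s)
```

Two initiators pulling in exactly opposite directions give an agreement of 0; two initiators 90 degrees apart give roughly 0.71.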
How does the angle between initiators affect where followers move?
We tested whether the angle between initiators predicted where individuals moved in cases where a guineafowl did follow an initiation, focusing on events comprising two initiators. To identify which regime followers used (compromise or choose) for a given angle of disagreement, we ran a dip test of bimodality and a converging modes test (using the method developed by Hartigan & Hartigan 57 , and the code from Strandburg-Peshkin et al. 13 ). If vulturine guineafowl were in the compromise regime, then the distribution of angles taken by the follower would not be significantly bimodal (according to the dip test) and would be more unimodal than expected by chance (according to the converging modes test). If neither of these conditions held, then vulturine guineafowl were in the choose regime. We interpreted situations in which one condition held but not the other as demarcating a transition between the compromise and choose regimes. We ran these analyses by combining events into 12-degree bins of angular disagreement. We conducted this analysis independently on both groups, using code developed by Strandburg-Peshkin et al. 13 .
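In the compromise regime, the predicted follower heading is the circular mean of the two initiation directions, which is what produces the unimodal distribution described above. A minimal sketch (bearings in radians; the dip and converging modes tests themselves are omitted here, as they rely on the published code):

```python
import math

def compromise_direction(theta1, theta2):
    """Predicted follower heading under the compromise regime: the circular
    mean of the two initiators' bearings. Using vector addition rather than
    an arithmetic mean handles the wrap-around at +/- pi correctly."""
    c = math.cos(theta1) + math.cos(theta2)
    s = math.sin(theta1) + math.sin(theta2)
    return math.atan2(s, c)
```

For example, initiators at 0 and 90 degrees predict a compromise heading of 45 degrees, while bearings straddling the +/- pi boundary still average correctly.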
Where do guineafowl move when they choose one direction versus the other?
Finally, we investigated which direction followers chose when in the choose regime by examining the numerical difference among clusters of simultaneous pullers. Specifically, we expand the analysis in the previous section by looking at all events with more than one puller. In each of these events, we used a circular clustering algorithm 13 to identify clusters of individuals pulling in similar directions. We then extracted all of the events containing two clusters, and counted the number of individuals in each of the clusters. We then identified which of these clusters was successful, and related this to the numerical difference in the size of each cluster.
If guineafowl follow a majority rule, then they should be much more likely to follow numerically larger clusters. Following Strandburg-Peshkin et al. 2015 13 , we first fit a non-linear least squares model where the response variable was the probability of choosing a randomly allocated cluster 1, while the predictor was the numerical difference between the number of individuals in cluster 1 minus the number of individuals in cluster 2. We then estimated the uncertainty for each bin (i.e. for each numerical difference between the size of initiating cluster 1 minus the size of initiating cluster 2) by drawing n samples from a uniform distribution, where n is the number of events in that bin, and calculating the probability that the random values are less than or equal to the observed probability. We repeated this process 1000 times, and extracted the lower 2.5th and upper 97.5th quantile of these probabilities as a measure of the 95% confidence intervals.
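The core of the majority-rule analysis reduces to tabulating, for each numerical difference between the two clusters, how often followers chose a given cluster. An illustrative sketch (the non-linear least squares fit and the uniform-sampling confidence intervals described above are omitted):

```python
from collections import defaultdict

def majority_choice_prob(events):
    """events: list of (n1, n2, chose_cluster1) tuples, where n1 and n2 are
    the sizes of the two initiating clusters. Returns, per numerical
    difference d = n1 - n2, the observed probability of choosing cluster 1."""
    tally = defaultdict(lambda: [0, 0])   # d -> [times cluster 1 chosen, total]
    for n1, n2, chose1 in events:
        d = n1 - n2
        tally[d][0] += int(chose1)
        tally[d][1] += 1
    return {d: chosen / total for d, (chosen, total) in sorted(tally.items())}
```

A majority rule predicts that these probabilities rise above 0.5 as d grows positive and fall below 0.5 as d grows negative.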
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Results
Sex but not dominance determines who has influence
Decisions by animal groups can be despotic, where one individual decides 28 , partially shared (or graded), where some individuals contribute to decisions more than others 29 , or fully shared, where all individuals have an equal influence 13 . When contribution to decision-making is unequal, it has generally been predicted that dominant individuals should have greater influence 30 , but this has received mixed support 28 , 31 – 34 .
We first explored whether leadership in vulturine guineafowl is fully shared or graded by quantifying the role of dominance on influence. Vulturine guineafowl groups have steep dominance hierarchies (see Supplementary Fig. 1 ), which remain stable for several months 6 , 27 . We applied the approach described by Strandburg-Peshkin et al. 13 to infer who initiates movements and who follows, based on dyadic movement patterns from the GPS tracks collected simultaneously across group members (see Methods for details on GPS tracking). Initiation attempts were characterised by an increasing inter-individual distance followed by a decreasing inter-individual distance (see Methods and Fig. 1 ). Depending on the relative contribution of each individual to the change in distance, initiations were classified as being successful (‘pulls’, where A moves to increase the distance and B moves to subsequently reduce the distance) or unsuccessful (‘anchors’, where A moves to increase the distance but, subsequently, decreases it by moving back towards B).
Summarising 502,253 leader-follower cases from two social groups, we confirm that all group members can initiate movement and pull others, but that there is a distinct subset of individuals that are more likely to be followed (Fig. 2A, D ). To investigate the relationship between dominance and the probability of being followed, we ran permutation tests within and between sexes (Fig. 2B, C, E, F , see the Methods section for details on the permutations). While it appears that more dominant individuals are more likely to be followed (Fig. 2B i, E i ), analyses controlling for sex show it was rather that males, who are dominant over females, are more likely to be followed (Fig. 2B ii-iii, C, E ii-iii, F ), and that there is no effect of dominance within sex. In a two-puller context comprising one male and one female initiator and where followers choose one direction (see below), the effect of sex translates to a difference in success rate of approximately 10% (Group 1; P male success = 0.543, Group 2; P male success = 0.553). However, success is not only determined by the probability of being followed, but also by the rate of initiating. When considering the number of successful initiations for each individual, we again find no effect of dominance but a consistent effect of sex (Fig. 3 ).
Our results show that leadership in vulturine guineafowl is shared, aligning with the previous work on olive baboons 13 and with direct observations in this system 6 . However, unlike in baboons, leadership in vulturine guineafowl is not completely equal. Instead, it is graded, with males being more likely to be followed and initiating at higher rates (on average) relative to females. This difference to baboons may relate to the fact that males, who are dominant, are also the philopatric sex in vulturine guineafowl 35 . Staying in their natal group potentially allows guineafowl males to maintain life-long, and thus stronger, influence relationships with other members of their group. The natal sex also has more information about the local landscape than the dispersing sex, which may contribute to the observed differences (though we note that the baboon study 13 did not explicitly test for a sex difference). This difference may not, however, always play a role in decision-making. Females were often as successful at initiating as male group members, and many females initiated movements more often than some males. The relatively small differences between male and female vulturine guineafowl are likely to reflect the relatively low rates of conflict in most of their collective movements. We found that guineafowl are substantially more likely to follow initiators ( P success range: 0.7–0.9) than baboons ( P success range: 0.2–0.8) 13 . This is likely to explain the high degree of cohesion and small intra-group dispersion of vulturine guineafowl.
Individuals are more likely to follow when initiators agree
We aggregated the simultaneous initiation attempts acting on single candidate follower individuals into ‘events’ based on their overlapping start and finish timestamps, following Strandburg-Peshkin et al. 13 . While initiation attempts lasted on average for 2.4 min (SD = 2.7), the temporal overlapping nature of initiations that were combined into events meant that events lasted longer than the initiation attempts themselves, with an average of 7.0 min (SD = 6.2; Supplementary Fig. 2 ). For each event, we calculated the direction of the initiators in relation to the position of the potential follower and their directional agreement. Directional agreement ranged from 0, when the movement vectors of initiators were equally distributed over potential directions, to 1 when the movement vectors were perfectly aligned (see Methods section for more details). We also noted whether the potential follower was subsequently pulled or not, and calculated the direction of the movement of the follower if the follower was pulled. We defined events as successful if at least one initiator pulled the potential follower.
We found that the number of simultaneous initiators, the level of their directional agreement, and the interaction between these two, all predict the probability of following a given initiation. In both groups, increasing the number of initiators has a positive effect on following when the angular agreement was high, but a negative effect on following when the agreement was low (Fig. 4 ; Supplementary Table 1 ). Although the results are consistent across both groups, only the interaction is significant in Group 2 (for which we collected substantially fewer data; see Supplementary Table 2 ). Supplementary analyses that account for changes in the number of tracked individuals and the distance between initiators and potential followers confirm that our results are robust to variation in data collection and to the assumptions of the methods.
The interaction between agreement and the number of initiators on the tendency for vulturine guineafowl to follow, matches closely with the behaviour of baboons. Specifically, baboons also require greater agreement when there are more initiators in order to follow 13 . In vulturine guineafowl, the patterns are also very similar across both groups: having more simultaneous initiators requires a higher agreement for individuals to follow, and high levels of agreement (>0.6) generally result in a better-than-chance (>0.5) probability of an event being successful. Baboons appear to be more tolerant of disagreement, with any agreement over 0.3 producing a better-than-chance probability of an initiation being successful 13 .
Followers compromise the initiation directions when initiators agree but choose a direction when initiators disagree
For each successful event, we tested the theoretical prediction 12 that the angular agreement of the initiators should determine where a follower moves next. For simplicity, in this particular test we focused on events comprising two initiators (17.160% of all events for Group 1 and 18.157% of all events for Group 2, see Supplementary Table 2 ), allowing us to calculate the angle between the initiators relative to the potential follower. If a follower moves in a direction that averages the angle between initiators (i.e. ‘compromise’), then we expected a unimodal distribution in the directions taken by followers across repeated observations at a given angular disagreement. By contrast, if a follower ‘chooses’ one or the other direction, then we expected a bimodal distribution in the directions taken by followers across repeated observations with the same angle of disagreement.
We found that the direction taken by vulturine guineafowl followers has identical properties to those predicted by theory and those found in baboons. Specifically, in both guineafowl groups, followers compromise the initiated directions when the disagreement between initiators is below a critical threshold that separates the two regimes, and choose one direction versus the other when the disagreement is above the threshold (Fig. 5 , Supplementary Fig. 3 ).
As with previous results, we found strong concurrence between vulturine guineafowl and baboons. Baboons also express a transitional phase from compromise to choose, which is estimated to range between 72 and 96 degrees 13 . In Group 1, we found that the lower end of the transitional phase from compromise to choose is almost identical to that of baboons (78 degrees), but that the upper end is much higher (130 degrees). In Group 2, we could only find a transition threshold, which is estimated to be 117 degrees. But, as estimates for Group 2 are based on substantially fewer data, we expect that adding more data will reveal a larger range of uncertainty as in Group 1. The data from Group 2 do, however, also suggest that the upper end of the transition phase to the choose regime takes place at a larger angle in vulturine guineafowl than in baboons.
Followers move in the direction of the majority when choosing
To find where a follower moves when in the choose regime, we focused on cases when two or more individuals initiated toward different directions. We used a spatial clustering algorithm to identify sets of individuals co-initiating in similar directions, extracted cases in which there were exactly two clusters, and counted the number of individuals initiating in each of the two directions.
As predicted, we found support that vulturine guineafowl employ a majority rule when choosing one versus the other direction to move in (i.e. at high levels of disagreement, on the right side of each panel of Fig. 5 ). Specifically, in both Group 1 and Group 2, followers are disproportionately more likely to move in the direction containing the largest cluster of initiators (Fig. 6 , Supplementary Table 3 ).
Our results confirm that vulturine guineafowl use a similar majority rule to baboons when choosing between directions. In baboons, individuals have an 80% chance of choosing the majority when the difference between the number of initiators in each cluster is three or more. By contrast, the model fits predict that vulturine guineafowl require a larger numerical difference (a difference of 8–9 for Group 1 and 4–5 for Group 2) to reach the same level of discrimination.

Discussion
Our study shows that the movements of vulturine guineafowl are consistent with the predictions from a classic theoretical model of leadership and collective decision-making, and have striking similarities to the movements described in taxonomically distant but sympatric olive baboons 13 . In both our guineafowl study groups, we found that any individual could initiate movement, with no direct link between dominance and influence. Male guineafowl are more likely to be followed than females, and also have slightly higher rates of initiations. However, females still initiated often, and many had a high number of successful attempts. Like in baboons, conflicts in vulturine guineafowl group decisions affect the probability that initiators are followed and, when they do, follower movements fall into one of two regimes: when the disagreement between concurrent initiators is small followers average the directions of the initiators and when the disagreement is large they choose the direction with the most initiators. Our study also demonstrates the importance of replication in ecology and animal behaviour 15 , 16 , 36 , showing that by following the same methods and conducting the same statistical tests we could reveal that the emergence of collective decisions from simple rules governing group cohesion are likely to be consistent across very distinct taxonomic groups.
While influence can be distributed within the group 37 , whereby all individuals can initiate movement and be followed, it is not necessarily equal among group members 29 , 38 , 39 . For example, homing pigeons form influence hierarchies during flight, and these hierarchies determine whom an individual is likely to lead and by whom it is most likely to be led 32 . However, these influence hierarchies are independent from dominance hierarchies 33 . In vulturine guineafowl, we found that leadership is generally shared, but that males are more likely to be followed and initiate more often than females. This difference reflects the social structure of vulturine guineafowl societies, where males are dominant over all females 6 , 27 , who are also the dispersing sex 35 . However, males have been found to be more influential than females in collective departures also in species in which female matrilines dominate aggression hierarchies 7 , 40 , suggesting that neither the dominance hierarchy alone, nor the dispersal tendency of the sexes, always determine which individuals influence group coordination. Further, while the differences we found between males and females are significant, females still had substantial influence over where groups moved. One key outstanding question is therefore to identify whether there are specific contexts in which the ability to exert influence (i.e. have a higher probability of being followed) may be important.
Within each sex group of vulturine guineafowl, we found no evidence for dominance playing a role in influence (although in both groups, the lowest ranking female initiated less often than almost any other group member). While it appears that female vulturine guineafowl have overall less influence than males on where their group goes, unlike in other species 41 – 45 , it is also possible that females exhibit specific strategies to influence decisions 46 , 47 . For example, they could influence when groups leave 6 , as has been suggested in baboons 48 , with timing decisions potentially reflecting a distinct axis of decision-making 13 , 49 . Such routes of influence would not be obvious from the analytical approach that was employed in the current and baboon studies 13 . Rather, our approach likely captured a number of very general collective movements, including many moment-by-moment decisions (e.g. which way to move around a tree). Identifying the functional importance of each decision (e.g. those that dictate where groups move next at larger spatial scales) remains a challenge in the field.
Despite the properties of follower movements in response to initiators being very similar across vulturine guineafowl and baboons, details may vary from group to group 50 and from one context to the next 51 . In theory, larger group sizes should be associated with a decrease in the angle at which follower movements transition from compromise to choose 12 . Group 1 was similar in size to the previously studied baboon group, and their transitional phases overlapped 13 . Further, the transitional phase of Group 1 started at values that were almost 40 degrees smaller than that of the smaller Group 2, although it also extended beyond that of Group 2 (Fig. 6 ). Given this overlap, we cannot safely draw conclusions on whether our findings support the theoretical prediction that larger groups show a smaller transitional angle, and therefore data on more groups of different sizes are required to address this question. The potential influence of the social, as well as the physical, environment is worth exploring, given that environmental effects have already been documented across various facets of collective behaviour 51 .
One behaviour where we found clearer differences between groups is the majority rule employed by each. In vulturine guineafowl, the smaller group (Group 2) appeared to require a smaller threshold in order to identify a clear majority. A key question is whether the shift in the threshold scales with group size. Our data suggest that the larger group (Group 1) reliably chose the majority (80% of the time) when there was a higher proportional difference in initiators (approximately one third of Group 1) compared to the smaller group (approximately one quarter of Group 2). Baboons required a much smaller majority to reach the same 80% threshold 13 . Given that the baboon troop that was studied was larger than Group 1 (and had a very similar proportion of GPS-tracked group members), these results suggest that discrimination may be harder in larger groups, and that baboons could have a better capacity to discriminate smaller relative differences than vulturine guineafowl.
Our results show that the processes driving the movement patterns of wild group-living vulturine guineafowl are largely consistent with those previously described in a group of wild baboons 13 and in dyads of homing pigeons 4 , with the specific directions of movements by individuals, and responses to conflict when following, in particular matching those of olive baboons. Our work adds to the weight of support for predictions arising from a classic theoretical model of leadership and collective decision-making 12 . Further, by carefully conducting a large-scale within- and between-species replication, we propose that a multitude of group-living species could exhibit highly convergent processes governing how they reach consensus on where to move.

Shared decision-making is beneficial for the maintenance of group-living. However, little is known about whether consensus decision-making follows similar processes across different species. Addressing this question requires robust quantification of how individuals move relative to each other. Here we use high-resolution GPS-tracking of two vulturine guineafowl ( Acryllium vulturinum ) groups to test the predictions from a classic theoretical model of collective motion. We show that, in both groups, all individuals can successfully initiate directional movements, although males are more likely to be followed than females. When multiple group members initiate simultaneously, follower decisions depend on directional agreement, with followers compromising directions if the difference between them is small or choosing the majority direction if the difference is large. By aligning with model predictions and replicating the findings of a previous field study on olive baboons ( Papio anubis ), our results suggest that a common process governs collective decision-making in moving animal groups.
A GPS study on wild vulturine guineafowl reveals shared decision-making similar to that in baboons and theoretical models. In directional conflicts, followers compromise for small differences and choose the majority’s direction for large differences.
Supplementary information
The online version contains supplementary material available at 10.1038/s42003-024-05782-w.
Acknowledgements
We are grateful to Ariana Strandburg-Peshkin, Vivek Hari Sridhar, Margaret C. Crofoot, and Dora Biro for feedback on early versions of the manuscript as well as to five anonymous reviewers. We also thank Wismer Cherono and John Ewoi for field assistance. The research was funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 850859 awarded to D.R.F.), the Max Planck Society, and grants awarded to D.R.F. from the Daimler und Benz Stiftung (32-03/16) and the Association for the Study of Animal Behaviour. D.P. received additional funding from a DAAD PhD fellowship and an Early Career Grant from the National Geographic Society (WW-175ER-17).
Author contributions
D.P. and D.R.F. conceived, designed the study and performed the analysis. D.P. and B.N. collected the data. D.P. and D.R.F. drafted the manuscript. D.R.F. supervised all aspects of the study. All authors contributed to revisions.
Peer review
Peer review information
Communications Biology thanks Tamas Vicsek, Lisa O’Bryan, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary Handling Editors: Joao Valente. A peer review file is available.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Data availability
Processed data can be found on Figshare: 10.6084/m9.figshare.24850551. Raw GPS data are stored on https://www.movebank.org .
Code availability
We used the code from Strandburg-Peshkin et al. (2015, Science ). Our adjusted code can be found on Figshare: 10.6084/m9.figshare.24850551.
Competing interests
The authors declare no competing interests.
Ethics approval
Our study involved critical participation by local (i.e., Kenyan) academics, reflected in authorship and supervision. We have complied with all relevant ethical regulations for animal use and we obtained permits from the National Science and Technology Council (NACOSTI permit: NACOSTI/P/16/3706/6465), the National Environment Management Authority (NEMA permit: NEMA/AGR/67/2017), the Kenya Wildlife Service (Research Authorizations and Capture Permits), and the National Museums of Kenya (NMK). We especially thank Dr. Peter Njoroge from the Ornithological Section of NMK for providing useful feedback and reviewing our work, as well as the Mpala Research Centre, and the Max Planck Society's Ethikrat Committee for ethical permission to conduct this research.

Commun Biol. 2024 Jan 13; 7:95 (CC BY)
PMC10787765 (PMID: 38218947)

Introduction
Implantable bioelectronic devices play an increasingly important role in disease prevention, monitoring, and treatment 1 – 3 , and this is one of the most rapidly emerging fields in medicine 4 – 6 . As a representative of bioelectronics therapy, the cardiac pacemaker is a powerful tool for bradycardia and heart block therapies 7 . With the upgrading and iteration of technology, pacemakers are becoming intelligent and multi-functional 8 – 11 . Nevertheless, conventional pacemakers are associated with several complications, such as lead insulation breaks, pocket hematoma, local bulges, and scarring of the skin 12 . Moreover, some patients cannot be implanted with conventional pacemakers because of their comorbidities and venous system defects. To address these lead- and device-pocket-related issues, leadless pacing technology has been proposed. A leadless pacemaker 13 , which integrates the pacing leads with a pulse generator, is very small and can be implanted into the cardiac chamber through a catheter; this not only reduces pain and prevents trauma and related complications, but also decreases the risk of infection. Additionally, patients can hardly feel the presence of the pacemaker after implantation, which greatly improves their quality of life.
However, it should be noted that the application of implantable bioelectronic devices, including leadless pacemakers, faces various power-supply challenges in long-term operation 14 , 15 due to battery capacity limitations 16 – 18 . Besides, leadless pacemakers are costly and difficult to remove after implantation. Meanwhile, it is difficult to wirelessly recharge a leadless pacemaker because the cardiac chambers, located in the mediastinum, are filled with blood. In recent years, technologies based on electromagnetic 19 – 21 or piezoelectric effects 22 – 27 for converting biomechanical energy into electrical energy have been proposed. For electromagnetic technology, the permanent magnet is heavy and not compatible with magnetic resonance imaging (MRI) examinations 28 , and its weight may affect the normal physiological beating of the heart. Its performance is also limited by the beating frequency of the heart 29 . For piezoelectric technology, owing to the inherent strength of the heartbeat, the output voltage of piezoelectric devices generally struggles to meet pacing threshold requirements 30 . Therefore, it is necessary to develop new energy supply strategies for leadless pacemakers. As a new generation of biomechanical energy harvesting technology, triboelectric nanogenerators 31 – 34 provide an effective solution for leadless pacemakers. Currently, triboelectric nanogenerators with different structures and materials have successfully converted biomechanical energy into electrical energy 35 – 37 .
Herein, we propose a self-powered intracardiac pacemaker (SICP) with a capsule structure for harvesting biomechanical energy from cardiac motion based on nanogenerator technology. The device can be placed in the right ventricle through an intravenous route by a delivery catheter. The SICP integrates an energy harvesting unit (EHU) with a power management unit (PMU) & pacemaker module (PM). In a laboratory experiment, the open-circuit voltage ( V oc ), short-circuit current ( I sc ), and short-circuit charge ( Q sc ) of the device were about 21.8 V, 0.25 μA, and 6.4 nC, respectively. We demonstrate that the SICP can recharge its PMU via the EHU. The material and structural design creates a lightweight, miniature device that maintains stable energy harvesting performance and excellent biocompatibility in vivo. Testing in swine models demonstrated its capability for the treatment of arrhythmia. Taken together, this work provides an alternative strategy for harvesting biomechanical energy via a minimally invasive approach, which may effectively improve the service life of leadless pacemakers. These findings show that the SICP lays the energy foundation for the development of next-generation implantable bioelectronics.
Fabrication of SICP
The enclosure of the SICP was printed by a 3D printer (PHTON MONO) using UV-sensitive resins (ANYCUBIC Photon). Nanostructured PTFE film (50 μm) and POM pellets (diameter: 1.588 mm) were employed as triboelectric layers, processed by an inductively coupled plasma etching system (SENTECH/SI 500). In detail, the PTFE films and POM pellets were rinsed with alcohol and deionized water. The Au, which acted as the mask for the etching process, was sputtered onto their surfaces for about 30 s. Then the PTFE films and POM pellets were etched by ICP reactive ion etching for 300 s (ICP power: 400 W and 100 W) and 150 s (ICP power: 500 W and 150 W), respectively. The reaction gases in the ICP process were CF 4 (30.0 sccm), O 2 (10.0 sccm), and Ar (15.0 sccm). The two Au electrodes were deposited in parallel on the back of the nano-PTFE by magnetron sputter (Denton Discovery 635) for 15 min (sputter power 50 W); the gap between them was about 1 mm. The Au bottom electrodes were connected to ground by wire and polarized with a voltage of 4.5 kV for 15 min through the corona needle. The composite film was attached to the inner surface of the enclosure. The POM pellets were placed in the cavity of the energy harvesting unit. Altium Designer software was used to design the power management unit and pacemaker module. The custom control chip is an ultra-low-power design that operates over a low supply-voltage range. The circuit board was assembled using surface-mount technology (SMT). A spiral platinum-iridium alloy electrode was attached to the bottom of the SICP as the cathode, and a platinum-iridium alloy ring was attached to the waist of the SICP as the anode. Narrow strips of tungsten sheet (thickness: 50 μm, width: 1 mm) were rolled into rings and fixed to both ends of the SICP as radiopaque markers. Nickel alloy wires were bent into an arch (diameter: 0.3 mm, length: 10 mm) and crossed through holes in the bottom of the SICP as a hook to anchor in the myocardial wall.
Encapsulation of the SICP
The ethyl cyanoacrylate (Aibida, Guangzhou, China) was employed to close the seams of the enclosure. Then the one-component UV light-curing adhesive (8500 Metal, Switzerland) was spin-coated on the enclosure as the package layer and then cured under UV light for 10 s. The holes in the enclosure that led out the wires need to be sealed with light-curing glue several times. Finally, the parylene-C particles were steamed at 135 °C/690 °C and then deposited on the surface of SICP where the thickness of parylene is 3 μm.
Cell viability
The L929 cells (fibroblasts, GNM28) were acquired from the Cell Bank of the Chinese Academy of Sciences in Beijing, China. After being cultured to a stable stage, L929 cells were collected and seeded on 96-well tissue culture polystyrenes (TCPs, Corning, USA) at a density of 1 × 10 6 cells/ml. For the experimental group, the culture dish was deposited with the encapsulation materials. 20 μL CCK-8 solution (Solarbio, China) was mixed with 280 μL cell culture medium for each well. The cells were cultured for 1, 2, and 3 days, and cell viability was evaluated each day. After incubating the cells with CCK-8 solution for 1 h at 37 °C, 200 μL cell culture supernatant was transferred into a 96-well plate (with at least three repetitions for each group). The absorbance of the solution in the 96-well plate was measured at 450 nm with a microplate absorbance assay instrument (Bio-Rad iMark, USA).
Cell morphology and immunofluorescent staining
Cell morphology could show the cellular growth condition on different substrates. The cytoskeleton and nucleus were stained with Phalloidin (Abcam, USA) and 4′, 6-diamidino-2-phenylindole (DAPI, Solarbio, China). The L929 were cultured for 1, 2, and 3 days on the naked 96-well disposable confocal dish, and the dish deposited encapsulation materials for cell morphologic observation. Before staining, the cells were rinsed with phosphate buffer solution (PBS, Solarbio, China) three times gently, then fixed with 4% paraformaldehyde (Solarbio, China) for 10 min and permeabilized with 0.1% Triton X-100 (Solarbio, China) for 10 min. Finally, the fixed cells were stained with Phalloidin for 40 min, DAPI for 10 min at room temperature, and washed with PBS three times. The stained cells were visualized by the laser scanning confocal microscope (SP8, Leica, Germany) under the filter at E x / E m = 493/517 nm.
Platelet adhesion tests
Rats were anesthetized with 2% isoflurane (RWD, R510-22-4). After the location of the abdominal aorta was determined by laparotomy, 2 ml of fresh arterial blood was collected using a blood collection needle. The collected blood was left in a centrifuge tube for 30 min and then centrifuged at 110 × g for 10 min. Platelets were then collected and incubated on the sterilized encapsulation coating material for 90 min at room temperature. After 30 min of fixation with 4% paraformaldehyde (Solarbio, China), platelets were dehydrated in serial concentration gradients of ethanol (50%, 60%, 70%, 80%, 90%, 95%, and 100%) for 30 min. After evaporation to dryness at room temperature, the surface of the encapsulation coating material with platelets was visualized by SEM.
Hemolysis assay
Working solutions: positive group: 0.3% Triton X-100 (Solarbio, China); negative group: normal saline (0.9% NaCl); material group: 1 mg/ml encapsulation coating material leaching solution (1 mg encapsulation coating material was immersed in 1 ml of normal saline for 3 days). 1 ml of fresh blood from male 6-week-old SD rats (220 g) was placed in a 15 ml centrifuge tube; the blood was washed 4–5 times with 4 ml PBS (Solarbio, China) (170 × g, 5 min), removing the supernatant after each wash. The washed red blood cells (RBCs) were resuspended in 10 ml PBS (Solarbio, China). 0.2 ml of resuspended red blood cells was mixed with 0.8 ml of working solution and incubated for 4 h. The mixture was centrifuged at 170 × g for 5 min and photographed for recording. The supernatant was removed and the OD value (541 nm) was measured by a microplate reader. %hemolysis = (OD test − OD neg )/(OD pos − OD neg ) × 100%.
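The hemolysis formula above is simple enough to script directly. The sketch below implements it; the absorbance values are purely hypothetical placeholders (only the 5% acceptance ceiling comes from the text).

```python
def hemolysis_percent(od_test, od_neg, od_pos):
    """%hemolysis = (OD_test - OD_neg) / (OD_pos - OD_neg) * 100,
    from absorbance (OD) read at 541 nm for the material leachate,
    saline (negative), and Triton X-100 (positive) groups."""
    return (od_test - od_neg) / (od_pos - od_neg) * 100.0

# Hypothetical OD readings for illustration only.
rate = hemolysis_percent(od_test=0.08, od_neg=0.05, od_pos=1.05)
print(f"hemolysis: {rate:.1f}% (acceptance ceiling: 5%)")
```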
Histology
The tissues were fixed in a 4% paraformaldehyde (Solarbio, China) overnight at room temperature, followed by dehydration using a series of graded ethanol and xylene solutions (Solarbio, China). Subsequently, routine paraffin embedding was carried out, and tissue sections with a thickness of 4 μm were obtained using a microtome. To visualize the tissue morphology and extracellular matrix, hematoxylin-eosin (HE) and Masson’s trichrome staining (Solarbio, China) were performed with standard procedures. The resulting images were observed under a light microscope.
Animal preparation
All experimental processes were strictly in line with the institutional and national guidelines for the care and use of laboratory animals and the study protocol was reviewed and approved by the Ethical Committee of the Animal Experimental Center in the State Key Laboratory of Cardiovascular Disease and Fuwai Hospital (0103-1-1-ZX(Y)−3). The swine were anesthetized and intubated with respirators for artificial respiration, and then ECG and femoral artery pressure were recorded by the data acquisition hardware (MP150, BIOPAC System, INC.). Adult swine (age: 1.2–2.0 years; weight: 50–65 kg) were used ( n = 8, female = 5, male = 3). Rats (male, 220 g, 6 weeks of age, n = 10) were purchased from the Beijing Vital River Laboratory Animal Technology Co., Ltd., China.
Induction of AVB
Complete atrioventricular block (AVB) in the swine model was induced by radiofrequency ablation. The right femoral vein was dissected and cannulated with an 8F sheath to introduce a contact force catheter (Thermocool SmartTouch, Biosense Webster Inc., Diamond Bar, CA). Electroanatomic reconstructions of the cardiac structures (superior vena cava (SVC), inferior vena cava (IVC), and right atrial (RA)) were performed using a three-dimensional mapping system (CARTO3, Biosense Webster, Inc., Diamond Bar, CA). After mapping a (or cluster of) near field His-bundle (HB) potential, radiofrequency ablation with a power of 35 W and a duration of 120 s was performed in this HB region (ablation target). The signs of successful ablation were as follows: (1) the emergence of complete AVB; (2) the occurrence of the escape rhythm. The ablation catheter with the 8F sheath was then withdrawn and the incision of the right femoral vein was sutured.
Implantation of SICP
First, the right-side neck skin was prepared with a tincture of iodine solution. Then, the external jugular vein was exposed with a small incision. Next, a low-dose bolus of heparin (2000 to 4000 units) was delivered intravenously. After the vein was punctured, the next step was to guide a stiff guidewire (RF*GA35153M: Terumo Corporation, Tokyo, Japan) down into the IVC. Under fluoroscopic guidance, the delivery sheath was advanced through the external jugular vein and into the right atrium (RA) over the guidewire. The radiopaque marker band on the tip of the outside sheath should be positioned in the mid-RA, followed by retracting the inside sheath. Then the homemade delivery catheter integrated with SICP was advanced across the tricuspid valve into the right ventricle via the outside sheath. Once SICP was deployed with adequate fixation, the homemade delivery catheter with the outside sheath was retracted from the external jugular vein leaving SICP in the final position of the right ventricular apex. Upon removal of the sheath, the right-side neck skin and the vein were sutured.
Characterization methods
The scanning electron microscopy (SEM) images were taken by a Hitachi field emission scanning electron microscope (SU 8020). The V oc , I sc , and Q sc were detected by an electrometer (Keithley 6517B) and recorded by an oscilloscope (Teledyne LeCroy HDO6104). The tensile test of SICP was performed by the ESM301/Mark-10 system. Optical photographs of the movement of the pellets in the cavity were exhibited by a high-speed digital video camera (FASTCAM Mini AX200 JAPAN). Optical photographs and movies of SICP fixed on the endocardium of the right ventricle in an isolated heart were recorded by an endoscope (BRK pt50).
Statistical analysis
All experiments were repeated at least three times. Data are presented as mean ± standard deviation (SD). The statistical significance of differences was determined by a two-tailed t -test; ns indicates no significant difference. Origin 2018, GraphPad Prism version 8.0, and Excel were used for data analysis and plotting.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. | Results
Design features and materials of SICP
The schematic illustration in Fig. 1a shows a battery-free and transcatheter SICP in the right ventricle. As shown in the partially enlarged drawing, this capsule-shaped device consists of EHU, PMU&PM, hooks, and radiopaque markers, which can be integrated with a customized delivery catheter system for interventional implantation in the heart through the intravenous route. The SICP, fixed on the right ventricular endocardium through the design of the hook structure, converts biomechanical energy from cardiac motion to electricity. Figure 1b displays the internal perspective structure of the device. The resin shell of the SICP is constructed by stereolithography, onto the surface of which Parylene-C is deposited by the parylene coating system to form a waterproof coating with good biocompatibility (Supplementary Fig. 1 ). Polyformaldehyde (POM) pellets and polytetrafluoroethylene (PTFE) film deposited with gold electrodes are employed for the fabrication of the EHU. The POM pellets roll back and forth between the two electrodes under heart beating, which produces an alternating current to the external circuit based on contact electrification (CE) and electrostatic induction. The illustration in Fig. 1c shows that CE between POM and PTFE materials can be presented using the surface state model 38 . Material surface electron transfer is the main cause of CE 39 . Based on the CE theory, the basic theory of the triboelectric nanogenerator was further proposed by introducing the displacement current term ( ∂D/∂t ) into Maxwell's equations, where the polarization density P s is mainly due to the presence of surface electrostatic charges caused by CE. The total displacement current from Maxwell's equations is stated as follows:

J total = J c + ∂D/∂t = J c + ε 0 ∂E/∂t + ∂P s /∂t

where J c denotes the density of the free conduction current, D is the electric displacement vector, E represents the electric field, and ε 0 ∂E/∂t is the displacement current due to time variation of the electric field.

As the theoretical origin of the triboelectric nanogenerator, ∂P s /∂t represents the displacement current due to the movement of the charged media as driven by an external mechanical agitation or force. This term is the key to converting mechanical energy into electrical energy in the nanogenerator.
As a three-dimensional exploded view of the SICP, Fig. 1d shows the various unit components of the device. The integrated PMU&PM is miniaturized to fit in the cylindrical cavity with an outer diameter of 6.8 mm (inner diameter of 5.8 mm) and is connected, respectively, to the tail-end energy harvesting unit and the head-end pacing electrode (Fig. 1e ). The overall length of the device was 42 mm and the volume was only 1.52 cc. To avoid affecting the normal physiological contraction of the heart, a device attached to the heart should typically be <1–2% of the weight of the heart (<3–6 g) 40 . The overall weight of our device was only 1.75 g, well within this limit. Besides, scanning electron microscope (SEM) images of POM and PTFE before treatment by inductively coupled plasma (ICP) are shown in Supplementary Fig. 2 and Fig. 1f , respectively. The micro-nano structure on the pellet and PTFE surfaces was constructed by inductively coupled plasma technology to increase the contact area between the pellets and the surface of the PTFE film during movement, which effectively enhances the electrical output performance of the device.
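As a quick check of the mass budget quoted above, the arithmetic can be scripted; the heart mass used here is an assumed round figure for an adult swine, not a value measured in this study.

```python
device_mass_g = 1.75       # reported overall weight of the SICP
heart_mass_g = 300.0       # assumed adult heart mass (illustrative)
limit_fraction = 0.01      # stricter end of the 1-2% rule of thumb

ratio = device_mass_g / heart_mass_g
print(f"device/heart mass ratio: {ratio:.2%}")
print("within 1% budget" if ratio < limit_fraction else "exceeds 1% budget")
```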
Working principle and electrical output performance of EHM
To understand the movement process of the POM pellets more intuitively, a high-speed camera was employed to capture the pellets' movement trajectory, which clearly shows that the pellets move sequentially from one end to the other along the side wall under the excitation of external mechanical motion (Fig. 2a ). The working principle of the EHU is shown in Fig. 2b and Supplementary Movie 1 . Under the excitation of cardiac motion, POM pellets roll back and forth on the PTFE surface. After multiple cycles of contact with the POM pellets, the PTFE film becomes negatively charged. Due to the electret properties of the PTFE film, charges can remain on the surface. When the POM pellets roll to the left, negative charges are induced on the left electrode. Subsequently, a current is generated in the loop and flows to the left electrode as the POM pellets roll back. With the periodic beating of the heart, under the combined action of gravity and supporting force, the POM pellets periodically roll on the curved surface of the PTFE film to generate an alternating current based on the freestanding mode of the triboelectric nanogenerator. A triboelectric nanogenerator can be considered as a capacitor and an ideal voltage source in series. Therefore, the differential equation of a triboelectric nanogenerator with an external pure resistance R can be represented by Kirchhoff's law 41 as follows:

R dQ/dt = V oc − Q/C

where Q denotes the transferred charge, C is the triboelectric nanogenerator's internal capacitance, and V oc represents the open-circuit voltage.
The open-circuit voltage is proportional to the transferred charge. Furthermore, the short-circuit current can be represented as follows:

I sc = dQ sc /dt
The short-circuit current also relies on the external motion velocity (movement frequency). We compared the electrical performance of the EHU with different numbers of pellets and tilt angles. As shown in Fig. 2c , for volume ratios ( V pellets / V cavity ) from 10% to 90%, the maximum value of the V oc was about 21.8 V at 40%. It was found that the overall height of the pellets at a 40% volume ratio is consistent with the height of the single-side electrode, which enables a maximum effective contact area during movement. The motion direction consistent with the direction of the long axis of the device was defined as 0° (Supplementary Fig. 3 ). The electrical output performance of the device gradually decreases with an increase in the angle (Fig. 2d ). The highest performance of the EHU was obtained at an angle of 0°. More importantly, when the tilt angle was 30°, the output voltage of the device was maintained at >65% of the maximum output. The V oc of the EHU could still reach 6.0 V when the tilt angle reached 45°. The good compliance of the device with the angle is mainly due to the spherical structural design of the POM materials. As shown in Fig. 2e , under the optimal motion angle and volume ratio, the V oc , I sc , and Q sc of the device were about 21.8 V, 0.25 μA, and 6.4 nC, respectively. The excellent fixation of the device is a key factor in ensuring good electrical output in vivo. In addition, a tensile test was performed on the SICP. As shown in Fig. 2f , the device can withstand the cyclic action of a 1.5 N tensile force (Supplementary Movie 2 ). An isolated-heart simulation experiment also showed that the SICP achieves good fixation in the heart (Supplementary Fig. 4 and Supplementary Movie 3 ). The V oc and I sc of the EHU show opposite trends with different load resistances (Supplementary Fig. 5a ). The EHU reached a maximum output power density of 2200 mW/m 3 at 100 MΩ (Supplementary Fig. 5b ).
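The governing relation R·dQ/dt = V_oc − Q/C behind these measurements can be integrated numerically to see how the load resistance shapes the delivered power. The sketch below is illustrative only: the capacitance, the square-wave drive, and all parameter values are assumptions, not parameters measured for the EHU.

```python
def average_power(R, C=50e-12, V0=21.8, period=0.4, t_end=0.8, dt=2e-6):
    """Euler-integrate R*dQ/dt = Voc(t) - Q/C for a TENG driven by a
    square-wave open-circuit voltage, returning mean power in the load R.
    All parameter values are illustrative assumptions."""
    q = energy = t = 0.0
    for _ in range(int(t_end / dt)):
        voc = V0 if (t % period) < period / 2 else -V0
        i = (voc - q / C) / R        # loop current (Kirchhoff's law)
        q += i * dt
        energy += i * i * R * dt     # energy dissipated in the load
        t += dt
    return energy / t_end

# While RC stays small relative to the motion half-period, the full charge
# transfers each cycle; a very large R throttles the charge transfer.
p_1M, p_100M, p_10G = (average_power(R) for R in (1e6, 1e8, 1e10))
print(f"{p_1M:.2e} W  {p_100M:.2e} W  {p_10G:.2e} W")
```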
After 6 million stimulus cycles applied by a linear motor, the V oc of the EHU remained stable compared with its initial state (Supplementary Fig. 6 ), exhibiting outstanding stability and durability, which enables long-term harvesting of biomechanical energy in vivo. In addition, the output performance of the EHU was further improved when the motion frequency was increased within a certain range (Supplementary Fig. 7 ).
Characterizations for PMU&PM
We developed an integrated power management unit & pacemaker module (PMU&PM) (Fig. 3a ). The length and width of the PMU&PM were 9.5 mm and 5.6 mm, respectively. The PMU&PM consisted of a rectifier bridge, capacitor, reed switch, electric pulse chip, and peripheral circuit (Fig. 3b ). The extremely narrow width enables the PMU&PM to be integrated with the EHU to form the SICP. The pulse frequency of the PM can be modulated on demand (Fig. 3c ), for example to 1.5 Hz or 1.8 Hz, which induces myocardial contraction and regulates heart rate via the pacing electrodes. Here, the output voltage and pulse width of the electrical pulses were set to 3 V and 0.5 ms (Fig. 3d ), respectively. Charging a 10 μF capacitor to 3 V via the EHU can power the PM continuously for nearly 40 s (Fig. 3e ). Moreover, we also verified that using a capacitor with a larger capacity (47 μF) can significantly increase the duration of pacing pulse release by the PM (Supplementary Fig. 8 ). To further explore the pacing efficiency of the PM in animals, complete atrioventricular block (AVB) was induced by radiofrequency ablation (Fig. 3f ). Figure 3g shows the electrocardiogram (ECG) before and after atrioventricular node ablation in a swine model. The heart rate decreased from 96 bpm to 33 bpm, showing that the bradycardia animal model was built successfully. The PM, set to 1.5 Hz, was used to pace the AVB animal model. The driving voltage of the pacing chip was increased to 1.5 V, and effective pacing in AVB animals was achieved (Fig. 3h ), which is consistent with the threshold voltage of the PM of 1.5 V in large-animal models.
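The 10 μF / 3 V / 40 s figures above imply a per-pulse energy budget that is easy to bound. The sketch below treats the capacitor as the sole energy source and takes the 1.5 Hz rate from the text, so the result is an upper bound per pulse (including chip overhead), not a measured pulse energy.

```python
C = 10e-6          # F, storage capacitor
V = 3.0            # V, charged level
rate_hz = 1.5      # pacing rate used for the AVB model
runtime_s = 40.0   # reported run time from one full charge

stored_j = 0.5 * C * V**2               # energy held by the capacitor
n_pulses = rate_hz * runtime_s          # pulses delivered before depletion
budget_j = stored_j / n_pulses          # upper bound per pulse
print(f"stored {stored_j*1e6:.0f} uJ -> ~{n_pulses:.0f} pulses, "
      f"<= {budget_j*1e6:.2f} uJ per pulse")
```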
Biocompatibility is critical to the success of implantable bioelectronic devices and depends on the encapsulation strategy and materials. Before implanting the device, we systematically evaluated the biocompatibility of the encapsulation material of the SICP. The cytoskeletal structures and cell nuclei were detected by immunofluorescence staining on days 1, 2, and 3, respectively (Supplementary Fig. 9 ). The viability of the fibroblast cell line L929 on the encapsulation film was tested with the Cell Counting Kit-8 (CCK-8) (Supplementary Fig. 10 ). Compared with the control group, the results revealed that the encapsulation material had no negative impact on cell growth and proliferation. Meanwhile, localized tissues from the skin to the deep-layer muscle at the implantation location of the materials were stained with Hematoxylin and Eosin (H&E). The tissues surrounding the materials of the device showed no observable differences compared with the surrounding tissue (Supplementary Fig. 11 ). Moreover, acceptable blood compatibility was required of the encapsulation materials in the present study. Hemolysis and coagulation on the materials were also evaluated. The average hemolysis rate of the encapsulation materials was remarkably lower than the International Organization for Standardization (ISO) standard (5%) (Supplementary Fig. 12a ). Platelets on the material maintained a round shape without obvious deformation or aggregation, indicating a low degree of activation (Supplementary Fig. 12b ).
Energy harvesting of SICP in vivo
To simulate the function of the SICP for biomechanical energy harvesting from cardiac chambers in human bodies, adult swine (age: 1.2–2.0 years; weight: 50–65 kg) were selected as our animal model. After the swine were anesthetized and intubated with respirators for artificial respiration, ECG and femoral artery pressure were recorded using the data acquisition hardware (MP150, BIOPAC System, Inc.). The right-side neck skin was prepared with a tincture of iodine solution, and the external jugular vein was exposed with a small incision (Supplementary Fig. 13 ). The diameter of the device was smaller than that of the external jugular vein, indicating that the device could be successfully delivered into the right ventricle through the venous system (Supplementary Fig. 14 ).
Once the vein access was obtained by the homemade introducer and dilator advancement over a guidewire (RF*GA35153M: Terumo Corporation, Tokyo, Japan), the homemade delivery catheter by integrating SICP was advanced across the tricuspid valve into the right ventricle, and SICP was delivered into the right ventricular endocardium. Figure 4a shows fluoroscopy images of minimally invasive delivery of SICP to the right ventricle via the intravenous route. The schematic diagram corresponding to the fluoroscopy images is shown in Supplementary Fig. 15 . The electrodes and PMU&PM of the device could be seen clearly through dynamic Movie (Supplementary Movie 4 and Supplementary Movie 5 ). The constructed radiopaque marker could accurately judge that the device successfully reached the right ventricle. The hook on the front end of the device interacts with the endocardium to hold the device firmly on the heart, and the helical electrode (cathode) makes good contact with the endocardium (Supplementary Movie 6 ).
After implantation of the device, no significant changes in ECG or blood pressure of the animal model were observed (Fig. 4b ). The delivery process did not affect the physiological state of the experimental animals. The cardiac motion caused the pellets to move back and forth on the surface of the PTFE membrane (Supplementary Movie 7 ). The EHU of the SICP generates a periodic alternating current based on the coupled effects of triboelectrification and electrostatic induction. The V oc , I sc , and Q sc of the SICP were about 6.0 V, 0.25 μA, and 8.5 nC in vivo (Fig. 4c ). With the periodic physiological contraction and relaxation of the heart, the pellets move freely in the SICP, and the electrical signal exhibits a certain volatility. We speculate that fluctuations in the electrical signal may also be affected by the blood flow. Nonetheless, our statistical analysis showed that the average voltage and current can reach 4 V and 0.2 μA, respectively (Fig. 4d ), and the output voltage exceeded 1.5 V for over 82% of the time. The output voltage of the EHU meets the requirement of the threshold voltage of the PM. In fact, cardiac contraction intensity is also affected by respiratory status and exercise. Therefore, appropriately enhancing cardiac functional status may be beneficial to improving energy harvesting efficiency.
Illustration pacing performance of SICP in vivo
The EHU of the SICP converts the biomechanical energy of the beating heart into electricity, which can be stored in the capacitor of the PMU to power the PM after the reed switch is turned on by a magnet (Fig. 5a ). That is, after the device is integrated with the heart, self-sufficiency of electrical energy is achieved. The voltage of a capacitor was charged from 0 V to 3 V within 9000 s with the same electrical output as the SICP in vivo (Supplementary Fig. 16 ). Consistent with the ECG and blood pressure, transthoracic echocardiograms also showed that no tricuspid regurgitation occurred after SICP implantation (Fig. 5b ). This result further suggests that the physical structure of the device itself has no effect on the heart. To demonstrate the pacing performance of the SICP, ECG signals of the swine during SICP operation are shown in Fig. 5c . The typical P wave, QRS complex, and T wave appear in sequence on the intrinsic ECG. When the SICP was in operation, a premature-paced QRS complex was induced by the electrical pulse stimulus (Supplementary Movie 8 ). The ECG showed regular paced QRS complexes occurring ahead of the P wave (atrial contraction), indicating that the ventricle was effectively captured by the SICP. The heart returned to its intrinsic rhythm after the PM of the SICP stopped working. The above results demonstrate the effectiveness of the SICP for pacing the heart rhythm. Furthermore, the wound was sutured after implanting the device into the right ventricle, and long-term experiments were performed. As shown in Fig. 5d, e , we monitored the ECG signals of the experimental animals for 2 and 3 weeks, respectively. The results showed that the heart rhythm remained stable, and the experimental animals ate and lived normally without the occurrence of complications (Supplementary Fig. 17 ).
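The in vivo charging figure reported above (0 V to 3 V on the PMU capacitor in 9000 s) implies an average net charging current. The back-of-envelope estimate below assumes an ideal 10 μF capacitor and neglects leakage and rectifier losses.

```python
C = 10e-6       # F, PMU storage capacitor
dV = 3.0        # V gained during in vivo charging
t_s = 9000.0    # s taken to reach 3 V

delta_q = C * dV            # net charge accumulated
i_avg = delta_q / t_s       # average net charging current
print(f"net charge {delta_q*1e6:.0f} uC, "
      f"average charging current {i_avg*1e9:.2f} nA")
```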
In the third week, the PM was switched on via the magnetic reed switch, and SICP effectively paced the heart in vivo: the heart rate of the animal increased from 90 bpm to 108 bpm, demonstrating that the device still maintained normal pacing function (Fig. 5f). Three weeks after implantation, the swine was euthanized by injection of a high-potassium solution, and the chest was opened to remove the heart with SICP. The device was firmly anchored to the myocardial tissue (Fig. 5g). Masson's trichrome staining after prolonged implantation revealed that cardiac tissue injury and inflammatory fibrous hyperplasia occurred only at the device fixation site, and hematoxylin and eosin (H&E) staining showed no detectable infiltration of lymphocytes at other sites, indicating that neither humoral nor cellular rejection of the device occurred in the myocardium at the implantation site (Fig. 5h). Overall, these in vivo tests demonstrate that SICP can achieve long-term pacing in large-animal models.
Discussion
We developed a SICP and demonstrated its efficacy and safety for cardiac pacing in large-animal models. This device provides a promising way to harvest biomechanical energy from cardiac motion to power the pacemaker module, with the significant advantages of being leadless, battery-free, deliverable by transcatheter intervention, and lightweight. Based on the synergy between the EHU and cardiac motion, the constant supply of energy enables the pacemaker to work stably, thereby avoiding the perioperative risk associated with device replacement due to energy depletion. In addition, to preserve cardiac physiological activity, the overall device adopts lightweight materials to reduce the load on the heart. Minimally invasive intervention with delivery technology decreases the risk of exogenous surgical infection and tissue trauma. The capsule structure facilitates implantation through the venous system and also substantially improved the in vivo energy conversion efficiency of the EHU, which is based on the freestanding triboelectric-layer mode of the triboelectric nanogenerator. The biomechanical energy harvested by SICP in each cardiac cycle is about 0.026 μJ (Supplementary Note 1), and the maximum power output of SICP is about 0.039 μW (Supplementary Note 2). Theoretically, this means that the energy harvested by SICP from four heartbeats exceeds the pacing threshold energy of a commercial leadless cardiac pacemaker (Supplementary Note 1).
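As a plausibility check, the energy budget quoted above can be verified with simple arithmetic. The per-beat energy and maximum power are taken from the text; the sub-0.1 μJ pacing threshold is an inference from the four-heartbeat claim, not a measured value:

```python
# Energy-budget check for the SICP figures quoted in the text.
ENERGY_PER_BEAT_UJ = 0.026   # microjoules harvested per heartbeat (from text)
MAX_POWER_UW = 0.039         # maximum power output in microwatts (from text)

# The text states that four heartbeats' worth of harvested energy exceeds the
# pacing threshold of a commercial leadless pacemaker, implying a threshold
# below ~0.1 uJ (an inference, not a measured value).
energy_four_beats = 4 * ENERGY_PER_BEAT_UJ
print(f"Energy from 4 beats: {energy_four_beats:.3f} uJ")  # 0.104 uJ

# At the 90 bpm intrinsic rate reported in vivo, average harvested power
# comes out consistent with the quoted maximum power figure.
beats_per_second = 90 / 60
avg_power_uw = ENERGY_PER_BEAT_UJ * beats_per_second
print(f"Average harvested power at 90 bpm: {avg_power_uw:.4f} uW")
```

Note that 0.026 μJ per beat at 1.5 beats per second gives 0.039 μW, matching the stated maximum power output to the precision quoted.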
Meanwhile, the EHU is constructed mainly of polymer materials, which makes SICP potentially compatible with MRI examinations in clinical applications. Specifically, SICP has good blood and tissue compatibility and does not cause significant inflammation in the endocardium. Three weeks after the operation, the experimental animals maintained a normal survival state, and the device exhibited excellent output performance. Large-animal experimental models effectively simulate clinical applications and may provide more valuable and comparable results. Although SICP has certain limitations for long-term constant pacing by clinical criteria, this work provides a proof-of-concept demonstration for the next-generation pacemaker and will facilitate the upgrade of existing commercial leadless pacemakers (Table S1). Furthermore, with further follow-up research and improvements in the efficiency of the EHU, we believe that the energy collected by SICP from one heartbeat can fully satisfy the leadless pacemaker for one pacing pulse. Going forward, further research is required to investigate a self-powered closed-loop operation system integrating active arrhythmia monitoring and stimulation regulation.
Abstract
Harvesting biomechanical energy from cardiac motion is an attractive power source for implantable bioelectronic devices. Here, we report a battery-free, transcatheter, self-powered intracardiac pacemaker based on the coupled effect of triboelectrification and electrostatic induction for the treatment of arrhythmia in large animal models. We show that the capsule-shaped device (1.75 g, 1.52 cc) can be integrated with a delivery catheter for implanting in the right ventricle of a swine through the intravenous route, which effectively converts cardiac motion energy to electricity and maintains endocardial pacing function during the three-week follow-up period.
We measure an in vivo open-circuit voltage and short-circuit current of the self-powered intracardiac pacemaker of about 6.0 V and 0.2 μA, respectively. This approach represents state-of-the-art progress in self-powered medical devices and may overcome the inherent energy limitations of implantable pacemakers and other bioelectronic devices for therapy and sensing.
Supplementary information
The online version contains supplementary material available at 10.1038/s41467-023-44510-6.
Acknowledgements
We are grateful to the laboratory members, Prof. Yubo Fan (Beihang University), Dr. Lingling Xu (National Center for Nanoscience and Technology), and Dr. Pengkang He (Peking University First Hospital, Beijing) for their cooperation in this study. This work was financially supported by grants from the National Natural Science Foundation of China (T2125003 to Z.L., 82102231 and 82372141 to Z. Liu, 61875015 to Z.L. and 82100325 to Y.H.), the National Key Research and Development Program of China (2022YFE0111700 to Z.L.), Beijing Natural Science Foundation (JQ20038 to Z.L.), Beijing Natural Science Foundation (L212010 to Z.L.), High-level hospital clinical research funding of Fuwai Hospital, Chinese Academy of Medical Sciences (No.2022-GSP-GG-11 to W.H.), the Fundamental Research Funds for the General Universities to Z. Liu, the China Postdoctoral Science Foundation (2023M731943 and BX20230169 to X.Q.), The Beijing Gold-bridge project (No. ZZ21055 to Y.H.).
Author contributions
Z. Liu, Y.H., X.Q., and Y.L. contributed equally to this work. Z. Li, W.H. and Z. Liu conceived the idea and guided the project. Z. Liu, Y.H., X.Q., and Y.L. designed the experiment and analyzed the results. Z.L.W. directed the preparation of EHU. Z. Liu, X.Q., Y.L., N.W., Z.Z. and B.S. fabricated SICP and performed the electrical characterization. Y.S., R.L., and Z. Liu performed the cell experiments. Z. Liu, Y.H., X.Q., Y.L., S.C., S.W., H.L., H.N., M.G. and Y.Y. performed in vivo surgery. Z. Liu, Y.H., X.Q. and Y.L. wrote the paper. All authors read and approved the final manuscript.
Peer review
Peer review information
Nature Communications thanks Igor Efimov, Tasneem Naqvi, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Data availability
The authors declare that all data supporting the results of this study are available within the paper and its Supplementary Information. The source data underlying Figs. 2c–f , 3c–e , 3g , 3h , 4b–d , and 5c–f and Supplementary Figs. 5 – 8 , 10 , 12 , and 16 are provided as a Source Data file (10.6084/m9.figshare.24715311). Any additional requests for information can be directed to, and will be fulfilled by, the corresponding authors. Source data are provided in this paper.
Competing interests
The authors declare no competing interests.
Citation: Nat Commun. 2024 Jan 13; 15:507. License: CC BY.
PMC10787766 (PMID: 38218737)
Introduction
Plasmodium vivax malaria is a global health problem in many tropical and sub-tropical countries of the world, with > 70% of cases occurring in Asia and the Americas 1 – 3 . An effective vaccine that provides protection and prevents transmission is considered the most cost-effective tool for malaria control and would greatly facilitate P. vivax elimination. The best targets for malaria vaccine development are parasite antigens that can induce an effective immune response in natural and experimental infections that is capable of inhibiting host cell invasion and parasite development 4 , 5 . To this respect, promising candidates are parasite proteins that play an important role in targeting cell infection.
Research into P. vivax vaccine targets has focused mainly on blood-stage antigens, with very few studies on pre-erythrocytic (PE) antigens. Sporozoites, the infective stage of the malaria parasite, are considered ideal targets for antimalarial strategies and protective immunity 6 . Sporozoites constitute a bottleneck in the parasite's complex life cycle, as only a few sporozoites are injected by an infected mosquito 7 ; they have a longer exposure time to the host immune system than blood-stage invasive forms 8 ; and the PE stages are clinically silent. Liver infection is an obligatory step in malarial transmission. Once injected into the skin, sporozoites actively migrate in the dermis, traverse the capillary epithelium into the bloodstream and through the liver sinusoids into the parenchyma, where they invade host hepatocytes, proliferate, and develop into exoerythrocytic forms (EEFs) inside a parasitophorous vacuole. Thus, PE vaccines aim to target the sporozoites and the EEFs, thereby preventing progression of the parasite to the blood stage. In other Plasmodium spp., sporozoite antigens are considered good vaccine candidates because antibodies against these antigens represent the first line of defense against infection. Studies have demonstrated that subunit vaccines based on sporozoite surface antigens and attenuated whole sporozoites can induce protection in both animal models and humans 9 . Orthologues of sporozoite antigens from other Plasmodium spp. are also present in P. vivax and have been shown to play critical roles during hepatocyte infection. Among them, the P. vivax circumsporozoite surface protein (CSP), the dominant molecule on the sporozoite surface, is a leading vaccine candidate and a prime target of irradiated-sporozoite immunity 6 , 10 – 14 . CSP performs multiple essential functions throughout pre-erythrocytic stage development, including motility, cell traversal, and liver-stage development (reviewed in 15 ).
Recent studies have demonstrated that CSP-based vaccines can elicit significant protection after sporozoite challenge 16 , 17 and attenuate liver-stage (LS) development 18 . This molecule forms the basis of the Plasmodium falciparum malaria vaccines (RTS,S/AS01 and R21/MM) currently authorized for use in children in endemic regions 19 – 21 . However, very limited progress has been achieved towards a P. vivax CSP based vaccine 22 – 29 . Other studies suggested protective anti-PvCSP memory B-cells are short lived due to poor immunogenicity 30 , 31 . Thus, there is a need to identify other potential candidates to partner with CSP in a multivalent vaccine to protect against infection and disease.
The cell-traversal protein for ookinetes and sporozoites (CelTOS) is another PE antigen, which is highly conserved among Plasmodium spp. 32 and considered an attractive vaccine candidate. It plays a critical role in sporozoite egress from host cells during traversal 33 – 36 , which is a necessary part of ookinete infection of the mosquito midgut and sporozoite infection of the liver. P. vivax CelTOS is naturally immunogenic 37 , 38 and immunization of mice with recombinant CelTOS elicits both humoral and cellular immune responses that reduced hepatocyte infection 34 , 37 , 39 – 52 . Antibodies targeting CelTOS can inhibit gliding motility, cell traversal, sporozoite hepatocyte infection, and impaired parasite development in the mosquito 39 , 52 . Parasites lacking expression of CelTOS are defective in mosquito and liver stage development 33 , suggesting its importance as both a transmission blocking and PE vaccine target.
In a recent study on transcriptional profiling of P. vivax sporozoites, we identified additional P. vivax sporozoite antigens including the sporozoite surface protein 3 (SSP3) and sporozoite protein essential for cell traversal (SPECT1), which are upregulated in response to changes in their microenvironment and are associated with host cell infectivity 53 . In the rodent malaria parasite Plasmodium berghei , SSP3 is predominantly located on the sporozoite surface and is shown to play a role in gliding motility and/or liver stage development. SPECT1 plays a role in pore formation, which is essential for traversal of the liver sinusoid and Küpffer cells 49 , 54 – 56 .
In this study, we evaluated the naturally acquired antibodies to P. vivax PE antigens CSP (VK210 allele), CelTOS, SSP3 and SPECT1 in plasma samples from vivax infected patients from two endemic regions in Thailand. At least 80% of the study subjects had antibodies to all four antigens and these antibodies inhibited sporozoite infection and hepatocytes development in vitro. Understanding the magnitude and quality of naturally acquired antibody responses to these sporozoite antigens in endemic populations will guide target selection for inclusion in a multi-valent PE vaccine aimed at generating protective antibodies against sporozoites infection and development in hepatocytes, preventing blood stage infection and disease. | Materials and methods
Ethics statement
This study was approved by the Committee on Human Rights Related to Human Experimentation, Mahidol University Central Institutional Review Board (MU-IRB 2012/079.2408 and MU-CIRB 2021/281.2505), and the Committee on Use of Human Subjects in Research, University of South Florida Institutional Review Board (IRB-Pro00018143). All the study participants or their legal guardians provided written informed consent.
Study population and sample collection
A total of 51 plasma samples were obtained from acute symptomatic P. vivax-infected patients in an area of low transmission in the provinces of Ranong (n = 23) and Chumphon (n = 28) in southern Thailand. In these regions, both P. vivax and P. falciparum infections are common, with an average of 40 vivax malaria cases per year based on 2017–2022 data from the Thai Department of Disease Control, Ministry of Public Health. Samples were collected during the rainy season, between August and December, which coincides with the high transmission season. Self-reporting study participants were screened for vivax malaria by microscopy of Giemsa-stained smears and confirmed by PCR. Only individuals aged 18 and above, and positive for P. vivax infection were recruited for this study. Other inclusion criteria included a systolic blood pressure greater than 90 mmHg, body temperature less than 40 °C, and hematocrit greater than 25%. Blood samples were collected from patients in heparinized tubes and the plasma was separated and stored frozen until needed. Plasma samples from six naïve North American volunteers were used as controls.
Recombinant protein production
Recombinant proteins were produced from four P. vivax pre-erythrocytic (PE) vaccine candidates: SPECT1 (PVP01_1212300), SSP3 (PVX_123155), CelTOS (Sal1, PVX_123510) and CSP (PVP01_0835600.1). For CSP and SPECT1 the signal peptides and transmembrane domains were excluded in expressed recombinant proteins, while for SSP3, only the N-terminal region from residues 19-203 was expressed 72 . The gene coding for CSP was codon optimized for Escherichia coli expression and cloned into pET21(a +) expression vector with a C-terminal hexahistidine tag. Recombinant CSP was expressed in E. coli BL21 Star (DE3) strain and purified on HisTrap HP by affinity chromatography using the Akta Pure System (GE). The production of rSSP3 and rSPECT1 was previously reported 72 . Recombinant CelTOS was expressed in E. coli , purified by Nickel-NTA chromatography and gel filtration as previously reported 34 , 35 , 73 , 74 . The purity of recombinant proteins was evaluated by SDS-PAGE.
Assessment of antibody response to P. vivax PE antigens
Plasma samples from vivax-infected patients (n = 51) were screened for antibody responses to recombinant P. vivax antigens CSP, CelTOS, SSP3, and SPECT1 by indirect ELISA. Briefly, 96-well microtiter plates were coated overnight at 4 °C with 100 μl/well of recombinant proteins at 3 μg/ml in PBS. Coated plates were washed with PBS/0.05% Tween-20 (PBS-T) and blocked with 5% (w/v) skimmed milk in PBS-T for 2 h at room temperature. Plasma samples diluted 1:200 with 2% milk in PBS-T were added to duplicate wells and incubated on a shaker for 2 h at room temperature. Six naïve North American plasma samples were tested on each plate as negative controls, and wells without coated antigen were used as background controls. After another PBS-T wash, wells were incubated with a phosphatase-labelled goat anti-human IgG (H + L) antibody (SeraCare) for 90 min at room temperature. Bound antibodies were detected after development with 100 μl of phosphatase substrate solution (SeraCare), and optical density (OD) values were measured at 650 nm. Antibody responses were reported as a Reactive Index (RI) 75 , calculated by dividing the OD of the test sample by a cut-off value determined for each plate, where cut-off value = mean OD + 2 SD of the 6 naïve North American control samples: RI = (OD_TS − OD_BK)/(mean OD_CS + 2 SD), where TS = test sample, BK = blank, CS = control samples, and SD = standard deviation. A sample with RI ≥ 1 (the cut-off value) was considered a responder, while a sample with RI < 1 was considered a non-responder.
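The RI calculation described above translates directly into a short function. This is a sketch: the OD values below are illustrative, not study data, and the sample (n − 1) standard deviation is assumed since the text does not specify which estimator was used:

```python
from statistics import mean, stdev

def reactivity_index(od_test, od_blank, control_ods):
    """Reactive Index per the text: (OD_TS - OD_BK) / (mean OD_CS + 2 SD),
    using a plate-specific cut-off from the naive-control wells.
    Assumes the sample standard deviation (n - 1 denominator)."""
    cutoff = mean(control_ods) + 2 * stdev(control_ods)
    return (od_test - od_blank) / cutoff

# Illustrative values (not study data): six naive-control ODs, one test well.
controls = [0.08, 0.10, 0.09, 0.11, 0.10, 0.12]
ri = reactivity_index(od_test=0.85, od_blank=0.05, control_ods=controls)
print(f"RI = {ri:.2f}, responder: {ri >= 1}")  # RI = 6.24, responder: True
```

A sample is then classified as a responder whenever its RI meets or exceeds 1, i.e. its background-corrected OD exceeds the plate's cut-off.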
Animal studies
Female Swiss webster ND4 mice (6–8 weeks; Envigo, USA) were used in the current study and all animal procedures were performed in compliance with relevant guidelines and regulations in IACUC protocol IS00008179 approved by the Division of Research Integrity and Compliance, University of South Florida. Mice were housed in ventilated cages and maintained at 21 °C with 12:12 h light–dark cycles at a relative humidity of 55 ± 10%. All animal studies are reported following ARRIVE guidelines 76 .
Parasites
The following P . berghei ANKA– P. vivax transgenic parasite lines were used: (i) 2321cl3, Pb-Pv CelTOS(r) where endogenous P. berghei CelTOS is replaced with P. vivax CelTOS (RMgm-4111, www.pberghei.eu ) 50 . (ii) 3378cl1, Pb-Pv SPECT1(r) where endogenous P. berghei SPECT1 is replaced with P. vivax SPECT1, (iii) 3392cl1, Pb-Pv SSP3(r) where endogenous P. berghei SSP3 is replaced with P. vivax SSP3 and (iv) Pb05cl1, Pb-Pv CSP P01(r) where endogenous P. berghei CSP is replaced with P. vivax CSP from P01 reference strain (Kolli SK, et al., manuscript in preparation). Pb-Pv CelTOS(r) and Pb-Pv SPECT1(r) transgenic lines express gfp-luciferase fusion reporter gene under the control of constitutive Pbeef1α promoter integrated into the neutral 230p gene locus 77 . Pb-Pv SSP3(r) and Pb-Pv CSP P01(r) express mCherry and luciferase reporter genes under the constitutive Pbhsp70 and Pbeef1α promoters, respectively integrated into the neutral 230p gene locus 78 .
Cell lines
Human hepatocyte cell line HC-04 (MRA-975) obtained from BEI resources were cultured in Minimum essential medium (Gibco) and Ham’s F12 nutrient mix (Gibco) supplemented with 10% heat-inactivated fetal bovine serum (GenClone), 30 mM HEPES (Gibco), 2 mM L-Glutamine (Gibco) and 40 μg/ml Gentamicin (Sigma-Aldrich) and maintained at 37 °C with 5% CO2 in a collagen coated flask (Corning). The cell line was tested negative for mycoplasma contamination (Invitrogen) and authenticated via American Type Culture Collection Human STR Profiling Service (ATCC).
Production of transgenic P. berghei sporozoites
Groups of female Swiss Webster ND4 mice (n = 2) were infected intraperitoneally with a cryopreserved stock of the respective transgenic P. berghei parasite line. At 2–5% parasitemia and comparable gametocytemia, three- to five-day-old female Anopheles stephensi mosquitoes were allowed to feed for 15–20 min on the mice, which were under anesthesia. The mice were then euthanized with CO2 after mosquito feeding. Infected mosquitoes were maintained at 21.5 °C and 80% relative humidity and supplied with 5% glucose ad libitum. Infectivity of mosquitoes was assessed by counting the number of oocysts on day 14 post feeding. Sporozoites were isolated by manual dissection of mosquito salivary glands on days 18–21 post feeding and collected in Leibovitz's L-15 medium (Thermo Fisher Scientific). The glands were centrifuged at 6000 rpm for 1 min and mechanically disrupted with a plastic pestle to release the sporozoites. The crushed sporozoite suspension was filtered through a 40 μm cell strainer (Greiner Bio-One), and sporozoites were counted using a hemocytometer.
Inhibition of liver stage development assay (ILSDA)
A collagen-coated 384-well plate (Greiner Bio-One) was seeded with HC-04 cells at a density of 8000 cells/well 16 h before infection with transgenic P. berghei sporozoites. Sporozoites (75 spz/μl) were incubated in a 1:100 dilution of patient plasma or naïve North American plasma for 20 min at room temperature. After incubation, 3 × 10³ sporozoites were added per well to triplicate wells. The plate was centrifuged at 200 × g for 5 min and incubated at 37 °C in a CO2 incubator to allow hepatocyte invasion. After 1 h, the culture medium was changed to remove uninvaded sporozoites, with a subsequent medium change at 24 hpi. At 48 hpi, EEFs of transgenic lines expressing mCherry were imaged by live fluorescence, and nuclei were stained with Hoechst 33342 for 30 min at 37 °C. Images were acquired using the high-content imaging system Cell Insight CX7 at 20X objective.
The EEFs of transgenic lines expressing GFP were fixed with 4% PFA and EEFs were imaged after performing an indirect immunofluorescence assay as previously described with minor changes 63 . Briefly, the PFA from the wells was washed off twice with PBS and the parasites stained by incubation with goat polyclonal P. berghei UIS4 antibody (LS Bio) at 1:1000 dilution in blocking buffer (1% BSA and 0.3% Triton X-100) at 4 °C overnight. The wells were washed twice with PBS and counter stained with an anti-goat Alexa Fluor 594 conjugated secondary antibody (Invitrogen) and Hoechst 33342 (Invitrogen) to stain the nuclei for 1 h at 37 °C. The wells were again washed with PBS and incubated in fresh PBS. Images were acquired using high content imaging system, Cell Insight CX7, at 20X objective. The images were exported, and parasites counted using an in-house Python program (Supplementary Methods Text S1 ).
Live fluorescence imaging of mCherry expressing EEFs were counted using parasite cytoplasmic (PCP) staining whereas GFP expressing parasites were counted using UIS4 positive staining of parasitophorous vacuolar membrane (PVM), and Hoechst 33342 for host nuclei staining. Two independent assays were performed for each serum.
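Conceptually, the counting step amounts to connected-component labeling of a thresholded fluorescence image. The in-house program is not reproduced here, so the sketch below is purely illustrative: a pure-Python flood fill counting bright foci on a toy intensity grid, standing in for UIS4- or mCherry-positive EEFs:

```python
def count_foci(grid, threshold=1):
    """Count connected bright regions (4-connectivity) in a 2D intensity
    grid -- a toy stand-in for counting stained EEFs in a well image."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and not seen[r][c]:
                count += 1                 # new focus found
                stack = [(r, c)]           # iterative flood fill
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and \
                       grid[y][x] >= threshold and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

# Toy 'image': two separate bright foci on a dark background.
image = [
    [0, 1, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0],
]
print(count_foci(image))  # 2
```

A production pipeline would add background subtraction, size filtering, and per-channel gating (e.g. requiring colocalized nuclear staining), but the counting primitive is the same.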
% inhibition of infection = [1 − (mean EEFs in test sample / mean EEFs in the 6 naïve North American control samples)] × 100.
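The inhibition formula above can be expressed as a one-step function; the EEF counts used here are illustrative, not study data:

```python
def percent_inhibition(test_eefs, control_eefs):
    """Percent inhibition of infection relative to naive-control wells:
    100 - (mean test EEFs / mean control EEFs) * 100."""
    mean_test = sum(test_eefs) / len(test_eefs)
    mean_ctrl = sum(control_eefs) / len(control_eefs)
    return 100 - (mean_test / mean_ctrl) * 100

# Illustrative counts (not study data): triplicate test wells vs six controls.
inhibition = percent_inhibition([60, 55, 65], [100, 95, 105, 110, 90, 100])
print(f"{inhibition:.1f}% inhibition")  # 40.0% inhibition
```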
Statistical analysis
All data were tested for normality using the Anderson–Darling test before analysis. Statistical differences for ELISA data were determined using the Kruskal–Wallis non-parametric test with a Dunn’s multiple comparison adjustment. Statistical differences for sporozoite invasion inhibition were also determined with the Kruskal–Wallis non-parametric test with a Dunn’s multiple comparison adjustment. A Spearman Correlation was done on the overall percent inhibition for each antigen. Analysis was performed using GraphPad Prism v. 10.0.2 for MacOS. | Results
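The Spearman correlation used for the pairwise inhibition comparisons (performed in GraphPad Prism in the study) can be sketched in pure Python. This is an illustrative reimplementation, not the authors' analysis code, using average ranks for ties and toy data:

```python
def _ranks(values):
    """Average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Perfectly monotone toy data gives rho = 1.0.
print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
```

In practice scipy.stats.spearmanr gives the same statistic along with a p-value; the point of the sketch is that the statistic depends only on ranks, not raw inhibition values.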
Naturally acquired antibodies to P. vivax PE antigens
Plasma samples from 51 Thai patients with acute P. vivax infections were screened for the presence of antigen-specific antibodies to selected PE antigens including CSP, SSP3, SPECT1 and CelTOS. There was considerable variation in magnitude of IgG antibody responses in individual patient samples against the different antigens, with significant differences in overall response observed between SSP3, CelTOS and SPECT1 (Fig. 1 a). Based on the reactivity profiles, antigen specific responses could be categorized into three response groups: high responders (HR), defined as samples with antibody reactivity index (RI) greater than the mean reactivity of all samples per antigen, low responders (LR), with RI less than the mean reactivity of all samples but greater than the cut-off value (RI = 1) and non-responders (NR) with RI less than or equal to the cut-off value (Fig. 1 b). Out of the 51 samples screened, 25 (49%) of them were HR for CSP and 25 (49%) for SPECT1. Similarly, SSP3 and CelTOS had the same number of HR at 19 (37.3%). However, these were not the same samples for each antigen. A total of 17 (33.3%), 30 (58.8%), 22 (43.1%) and 31 (60.8%) of samples were LR for CSP, SSP3, SPECT1 and CelTOS, respectively. Independent of their responder classifications, about 80% plasma samples had antibodies to all four antigens, 12% to three antigens, 6% to two antigens and no sample had antibodies to a single antigen. Only 2% of samples did not have antibodies to any of the four antigens (Fig. 1 c). The prevalence of antigen specific antibodies was 98% for CelTOS, 96.1% for SSP3, 92.2% for SPECT1 and 82.4% for CSP (Fig. 1 c).
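The responder categorization described above (HR above the per-antigen mean RI, LR between the cut-off and the mean, NR below the cut-off) can be written out as a small classifier; the RI values below are illustrative, not study data:

```python
def classify_responders(ri_values, cutoff=1.0):
    """Split samples into high responders (RI > mean RI of all samples for
    the antigen), low responders (cutoff <= RI <= mean), and non-responders
    (RI < cutoff), per the definitions in the text."""
    mean_ri = sum(ri_values) / len(ri_values)
    groups = {"HR": [], "LR": [], "NR": []}
    for ri in ri_values:
        if ri < cutoff:                # RI < 1: non-responder
            groups["NR"].append(ri)
        elif ri > mean_ri:             # above the per-antigen mean
            groups["HR"].append(ri)
        else:                          # responder, but at or below the mean
            groups["LR"].append(ri)
    return groups

# Illustrative RIs for one antigen (not study data); mean RI here is ~2.18.
g = classify_responders([0.5, 1.2, 2.0, 5.0, 0.9, 3.5])
print({k: len(v) for k, v in g.items()})  # {'HR': 2, 'LR': 2, 'NR': 2}
```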
Antibodies in plasma of Thai patients inhibit sporozoite invasion of hepatocytes
The potential protective effect of naturally acquired antibodies to these antigens on sporozoite invasion of hepatocytes was evaluated by an in vitro inhibition of liver stage development assay (ILSDA). This assay is performed in a 384-well plate and uses a human hepatoma cell line (HC-04) and transgenic P. berghei sporozoites expressing the different P. vivax PE antigens to support high-level liver infection and parasite development to blood-stage breakthrough 18 . Antigen-specific inhibitory antibodies were present in the patient samples and inhibited invasion of transgenic sporozoites into hepatocytes. There was wide variation and significant differences in the levels of antigen-specific inhibitory antibody responses of individual plasma samples, with antigen inhibition ranging from 0 to 50% at the tested plasma dilution of 1:100 (Fig. 2 a). The highest inhibitory effect was observed with anti-CelTOS antibodies, with mean percent inhibition ranging from 11 to 46.6% (mean = 27.3%), followed by SPECT1 with 4–44.6% (mean = 20.2%), SSP3 with 0–31.1% (mean = 13.9%) and CSP with 0–48.6% (mean = 14.7%). Spearman pairwise correlation of antibody inhibition between antigens demonstrated a moderate, significant correlation between anti-SSP3 and anti-SPECT1 antibodies (r = 0.62, p < 0.0001) as well as between anti-SPECT1 and anti-CelTOS antibodies (r = 0.65, p < 0.0001), while a weaker, albeit statistically significant, correlation was found between anti-SSP3 and anti-CelTOS (r = 0.41, p = 0.003) (Fig. 2 b). No correlation of inhibition was observed between anti-CSP antibodies and antibodies against SSP3, SPECT1 or CelTOS. There was no correlation between antibody titer (RI) and inhibition of sporozoite invasion into hepatocytes (Supplementary Fig. 1 ).
Characterization of naturally acquired antibody responses to P. vivax antigens following vivax infection is an important step towards target selection and rational vaccine design to protect against vivax malaria. Studies of naturally acquired IgG antibody responses to P. vivax antigens have focused mainly on blood-stage rather than pre-erythrocytic antigens 57 , 58 . Plasmodium sporozoites injected by the mosquito migrate to the liver, where they infect hepatocytes and develop into liver-stage or exoerythrocytic forms (EEFs) 49 , 59 . Since sporozoite antigens play critical functional roles during this migration process, they are considered potential vaccine targets. To determine whether PE antigens are naturally immunogenic and could induce protective antibody responses in residents of endemic regions, quantitative and qualitative analyses of IgG antibody responses to four P. vivax PE antigens (CSP, SSP3, SPECT1 and CelTOS) were performed on 51 plasma samples from acutely symptomatic vivax-infected patients from a low-endemicity region of southern Thailand. Antigen-specific antibodies were prevalent in the study subjects, with considerable heterogeneity in the magnitude of antibody responses (Fig. 1 ). Over 80% of the patients had antibodies to all four antigens, suggesting that epitopes of these antigens are immunogenic and commonly exposed to host immune responses during natural infection. However, the lack of correlation between antibody titer and level of functional inhibition suggests that epitope specificity and strain variation may be important variables in the development of protective antibodies.
Anti-CelTOS specific antibodies had the highest frequency, with 98% seropositivity, followed by anti-SSP3, anti-SPECT1 and anti-CSP antibodies with 96%, 92% and 82%, respectively. In a low-endemicity region in western Thailand where a majority of the study population was not infected, CelTOS was not substantially immunogenic 57 , while in the Brazilian Amazon, only 17.8% (94/528) of study subjects had specific CelTOS IgG antibodies 37 . This latter study demonstrated that high antibody responses against CelTOS were driven by cytophilic antibodies, suggesting that the antibody response to CelTOS might be associated with recent infection 37 , which could explain the high prevalence of anti-CelTOS antibodies in our study. However, Longley et al. 57 demonstrated that anti-CelTOS antibodies could last up to 1 year in the absence of P. vivax infection.
We also demonstrated the prevalence of naturally acquired anti-CSP antibodies in our study subjects. Other studies also reported the prevalence of anti-CSP antibodies in other endemic regions, some of which could last between 5 and 12 months even in the absence of detectable exposure to P. vivax infection 57 , 60 – 62 , with most of the antibody response biased towards the immunodominant central repeat region. The other P. vivax PE stage antigens, SSP3 and SPECT1, are also known to play important roles in sporozoite migration and infectivity. Most data on SSP3 and SPECT1 are based on studies with the rodent malaria parasite P. berghei 55 , 56 . Currently, there are no published data on the acquisition of naturally acquired immunity to P. vivax SSP3 and SPECT1 in endemic regions. Here, we showed that these two antigens are naturally immunogenic, with SSP3 showing significantly higher antibody responses than SPECT1 in the study subjects (Fig. 1 ).
The association between patient age and the presence of antibodies to these PE antigens was evaluated to determine if there was a correlation between the acquisition of naturally acquired immunity to malaria and age. The Median (interquartile range; IQR) age of participants was 32 (20–44) years old. No correlations were found between antibody levels to these antigens and the age of study participants (Supplementary Fig. 2 ), although future studies with a larger study population will be needed to validate this finding. Nonetheless, these data mirror studies with blood-stage antigen PvAMA-1 63 and liver-stage antigen, PvTRAP 64 , which demonstrated no correlations between age and antibody levels in individuals with acute symptomatic P. vivax infection.
Orthologues of these PE stage antigens have been shown to play important roles in gliding, traversal, and invasion in other Plasmodium species 49 , 54 , 65 – 67 . Antibodies to PfCelTOS inhibited sporozoite motility and hepatocyte invasion in immunized mice and conferred sterile immunity in a heterologous P. berghei sporozoite challenge 39 , 41 . Anti-PfCSP human antibodies are also associated with protective immunity 68 . In P. vivax, monoclonal antibodies to CSP not only block sporozoite entry but can also inhibit subsequent development within hepatocytes in vitro, as evidenced by abnormal EEFs 69 . To determine the potential vaccine efficacy of antibodies to the P. vivax PE antigens, we evaluated naturally acquired antibodies in patient samples for inhibition of sporozoite invasion of human hepatocytes by ILSDA. This assay platform has been validated for evaluating inhibition of liver-stage development and for assessing the specificity and sensitivity of both P. falciparum and P. vivax antibodies 6 , 18 , 70 . We showed that naturally acquired antibodies to these PE antigens in patient samples inhibited transgenic P. berghei sporozoites expressing the different PE stage antigens from invading hepatocytes in vitro (Fig. 2 ), suggesting the presence of functional epitopes of naturally acquired protective antibodies on these antigens. Overall, the highest inhibitory responses were against CelTOS, followed by SPECT1. Although a few individuals had high inhibitory antibodies against CSP and SSP3, most had either very low or no inhibitory antibodies at all. It should be noted that of the three naturally occurring P. vivax CSP variants, which differ in the immunodominant repeat regions, only the CSP-VK210 variant was assessed in this study, which might account for the high rate of non-responders and low or non-inhibitory antibodies against CSP if the repeat regions are the primary targets of inhibitory antibodies.
Thus, there is a possibility that some of the observed non-responders and non-inhibitory samples for CSP could be responders and inhibitory against a different CSP variant. In the case of SSP3, the use of a truncated SSP3 protein (amino acids 19-203) may have resulted in an underestimate of the SSP3 titers.
Interestingly, a sample with high inhibitory antibodies (% inhibition > 30%) against one antigen did not necessarily show high inhibition against the other antigens. Only 1 individual had high inhibitory antibodies against all three antigens (CelTOS, SPECT1, SSP3), 2 individuals had them against two antigens (CelTOS and SPECT1 or SSP3), and no individual had inhibitory antibodies against all four antigens. There was no correlation between high-titer anti-CSP inhibitory antibodies and inhibitory antibodies to any of the other three antigens. Although other parasite factors, such as the specific strain of the parasite, and host immune factors, including previous exposure, play a role in the development of immunity against these different antigens, our data suggest that the acquisition of functional antibodies against each antigen is independent of the others and support the need to identify potential PE antigens to partner with CSP in a multivalent vaccine targeting different PE antigens. Parasite antigens that are targets of naturally acquired inhibitory antibodies are therefore ideal candidates for inclusion in such a vaccine, to prevent hepatocyte infection and the onset of clinical disease by blocking the progression of the parasite in the liver to blood-stage breakthrough.
In summary, we showed that naturally acquired antibodies to the selected P. vivax PE antigens were prevalent in the study subjects from Thailand, but individuals had significant quantitative and qualitative differences in their antigen-specific antibody responses. Secondly, we demonstrated that there was no correlation between antibody titer (defined by RI), age of patients, and functional inhibitory activity against each antigen. This finding mirrors results previously reported for the Duffy binding protein (DBPII), which is a leading P. vivax blood-stage vaccine candidate 71 . This study is the first to investigate the functional activity of naturally acquired antibodies to these four P. vivax PE antigens. Together, our data highlight the potential of these PE antigens as vaccine targets and support their inclusion in vaccine designs aimed at targeting sporozoite invasion of hepatocytes and liver stage development. Preventing parasites from developing in the liver and progressing to the blood stage will not only prevent the development of clinical disease and morbidity but also prevent transmission. Additional studies are needed to investigate the dynamics involved in the acquisition of antigen-specific antibodies and the frequency of vivax infections in the study subjects. Further studies are required to assess the magnitude, longevity, and factors affecting the acquisition of antigen-specific antibodies with a larger cohort of patients.

Abstract

In Plasmodium vivax, the most studied vaccine antigens are aimed at blocking merozoite invasion of erythrocytes and disease development. Very few studies have evaluated pre-erythrocytic (PE) stage antigens. The P. vivax circumsporozoite protein (CSP) is considered the leading PE vaccine candidate, but immunity to CSP is short-lived and variant specific. Thus, there is a need to identify other potential candidates to partner with CSP in a multivalent vaccine to protect against infection and disease. We hypothesize that sporozoite antigens important for host cell infection are potential targets. In this study, we evaluated the magnitude and quality of naturally acquired antibody responses to four P. vivax PE antigens: sporozoite surface protein 3 (SSP3), sporozoite protein essential for traversal 1 (SPECT1), cell traversal protein of ookinetes and sporozoites (CelTOS) and CSP in plasma of P. vivax-infected patients from Thailand. Naturally acquired antibodies to these antigens were prevalent in the study subjects, but with significant differences in magnitude of IgG antibody responses. About 80% of study participants had antibodies to all four antigens and only 2% did not have antibodies to any of the antigens. Most importantly, these antibodies inhibited sporozoite infection of hepatocytes in vitro. Significant variations in magnitude of antigen-specific inhibitory antibody responses were observed with individual samples. The highest inhibitory responses were observed with anti-CelTOS antibodies, followed by anti-SPECT1, SSP3 and CSP antibodies, respectively. These data highlight the vaccine potential of these antigens in protecting against hepatocyte infection and the need for a multivalent pre-erythrocytic vaccine to prevent liver stage development of P. vivax sporozoites.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-024-51820-2.
Acknowledgements
We thank the residents of Ranong and Chumphon provinces of Thailand for providing the plasma samples that were used for this study. The following reagent was obtained through BEI Resources, NIAID, NIH: HC-04, Hepatocyte (human), MRA-975, contributed by Jetsumon Sattabongkot Prachumsri.
Author contributions
Study design: J.H.A.; manuscript draft: F.B.N.; performed experiments: F.B.N., S.K.K., P.A.S., J.N., M.M.O.; data analysis: S.J.B.; Python code: B.B.; provided materials: N.H.T., N.D.S., P.T., P.C. All authors reviewed the manuscript.
Funding
This study was supported by National Institutes of Health grant UO1: 5U01AI155361-0 (to J.H.A.). N.H.T. and N.D.S. are supported by the Intramural Research Program of the National Institute of Allergy and Infectious Diseases, National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Competing interests
N.H.T., N.D.S. and J.H.A. are inventors on patent number US-20190276506-A1 related to this work. The remaining authors declare that they have no competing interests.

Citation: Sci Rep. 2024 Jan 13; 14:1260 (CC BY)
PMCID: PMC10787767; PMID: 38218968

Background & Summary
Repeat Terrestrial Laser Scanning (TLS) topography measurements were part of the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) expedition, in which researchers aboard R/V Polarstern 1 drifted with and studied the same collection of ice floes in the Central Arctic from October 2019 to May 2020 2 – 4 . Arctic sea ice has grown dramatically younger and thinner in recent decades 5 and the overall objectives of MOSAiC were to understand the causes and consequences of this ‘new Arctic’. To do so, researchers were divided into teams studying the snow and ice (including on-ice and satellite remote sensing) 2 , atmosphere 3 , ocean 4 , ecosystem, and biogeochemistry. We conducted the TLS measurements as part of the ice team, for the primary purpose of quantifying snow accumulation and redistribution. Other applications of these data include observations of ice dynamics; surface roughness for ice-atmosphere interactions; and providing context for atmospheric observations, remote sensing instruments (e.g., on-ice radars 6 ), autonomous buoys, snow pit measurements, and more.
Snow substantially affects the Arctic sea ice mass balance due to its opposing impacts of insulating in the winter (restraining ice growth) and reflecting shortwave radiation in the summer (protecting against ice melt 7 ). Three of the four most important uncertainties impacting September sea ice volume in the CICE sea ice model 8 are related to the thermal and optical properties of the snow 9 . Snow spatial variability due to wind driven snow redistribution impacts these properties 10 , 11 . However, redistribution is challenging to measure due to this spatial variability. Repeat observations of snow changes on a substantial area of the same piece of ice are needed to observe these redistribution processes and understand their mechanisms. Furthermore, the magnitude of snow accumulation can be very small. For example, the largest snowfall event on MOSAiC precipitated just 1.6 cm water equivalent 12 . Thus, our observations of changes must be highly accurate. Finally, to observe snow accumulation and redistribution throughout the winter, measurements must be feasible in polar night.
TLS is routinely used to make highly-accurate measurements of snow accumulation on areas of 40 m 2 to 0.5 km 2 13 – 17 . The instrument is a laser scanner that is mounted on a ∼ 2.2 m tall tripod. Due to the generally flat topography of sea ice and shadowing, the instrument collects topographic information up to 100–200 m from itself. To observe a larger area, we relocate the tripod and collect measurements from different locations, which we then co-register into a common reference frame. TLS measurements on Arctic sea ice face unusual challenges, and our procedures for the MOSAiC Expedition were informed by prior experiments, including: the Seasonal Ice Zone Observing Network project 18 ; the Snow, Wind, and Time project 17 ; and the Sea Ice Dynamics Experiment. First, the typical temperatures of −15 to −35 °C are below the operating range of commercially-available TLS instruments. We addressed this issue with a custom-designed heater case. Second, TLS measurements are typically aligned in a geodetic reference frame via highly-accurate GNSS measurements. However, on drifting sea ice, the relevant reference frame for snow processes is a lagrangian reference frame fixed to the surface of the ice. We developed a custom software package, pydar 19 , to align repeat TLS measurements into this lagrangian, ice-fixed reference frame. To verify that our alignment achieved the necessary vertical accuracy for snow accumulation, we statistically validated it through comparison with in-situ measurements 20 . The quantitative results from our alignment validation are specific to this dataset. Finally, the TLS data contain potentially useful information that is irrelevant to our needs (e.g., sub-cm snow surface roughness in areas near the scanner, backscatter reflectance, roughness relevant to aerodynamic drag, etc...). We hope that future researchers will use these data for purposes that we have not imagined.
To facilitate this future usage, we have designed pydar to preserve the full scope of the data and to make our data processing decisions transparent to future researchers.
The primary purpose of this manuscript is to describe the TLS data collected at MOSAiC. However, given the unique challenges of using TLS on drifting sea ice, some discussion of lessons learned and future methodological developments is warranted. Prior experience on sea ice near Utqiaġvik, AK 17 , 18 found that spacing scan positions between 150 and 200 m apart generally produced acceptable data. However, the ice at MOSAiC was rougher than those experiments, and the placement of scan positions was also restricted due to not trespassing in sensitive measurement sites. Under these constraints, we found that spacing scan positions around 100 m apart produced better data, although we sometimes prioritized having measurements co-located with complementary measurements sites (e.g., the snow and ice thickness transects) over achieving the best TLS coverage. Additionally, the alignment procedures and validation were developed after the expedition, thus the alignment validation relies on what coincident in-situ measurements were available and had not experienced blowing snow events between the in-situ measurement and the TLS acquisition. The statistical validation methods presented herein are applicable for future campaigns, but the specific quantitative results depend on the measurements. Therefore, validation should be conducted for any TLS campaign on sea ice. For future campaigns, we strongly recommend including more in-situ point measurements of snow surface changes coincident with the TLS measurements. One expeditious approach for this would be to include a small array of snow thickness stakes near each reflector post. This would ensure that the in-situ measurement sites were visible from multiple scan positions, and the snow surface measurements could be made quickly while distributing the reflectors at the start of a TLS measurement day.
This manuscript describes the repeat TLS data collected on the MOSAiC expedition from October 2019 to May 2020 21 . We present the philosophy of the data processing and its implementation in pydar 19 . Finally, we validate the vertical alignment and discuss considerations for reuse of these data.

Methods
Terminology
We use the following terms throughout this manuscript. They are mostly drawn from their usage in RiSCAN (Riegl's software for acquiring and processing TLS data):

Scan Position: set up the tripod at a given location and measure the topography within the scanner's line of sight.

SingleScan: the data collected from a single scan position. We use this term to refer to both the point cloud of topographic measurements from this scan position and ancillary data such as the locations of TLS reflectors within the scanner's reference frame at this scan position and the rigid transformations that register and align this SingleScan with others (see below).

Project: a collection of SingleScans covering a contiguous area that were collected during a sufficiently short time interval such that no topographic change occurred between scan positions (sometimes ice deformation occurred during a Project; these exceptions are described in the Usage Notes). Typically, the set of SingleScans in a Project were collected in a single day of measurements, although on some occasions measurements were collected over two days.

Registration: the act of computing the rigid transformations that describe the spatial relationships between the different SingleScans in a Project. Registration places all SingleScans in a Project into a common reference frame (whose origin and unit vectors are typically defined by an arbitrary SingleScan).

Scan Area: a region of ice whose topography we measured over time with a succession of Projects.

Alignment: the act of computing the rigid transformations such that SingleScans from different Projects in the same Scan Area are in a common reference frame. Alignment is necessary to precisely locate topographic changes (e.g., how much snow accumulation occurred at a specific location on the ice from 4 January to 11 March).
Data collection
We used a Riegl VZ1000, which has an eye-safe, near infrared laser (1550 nm). For each scan position, we mounted the scanner on a tripod, and the scanner rotated on vertical and horizontal axes to create a point cloud of its surroundings. The scanner was controlled via WiFi from a field laptop. The angular stepwidths in the azimuthal and vertical directions were each 0.025°, and it took approximately 8 minutes to acquire a point cloud with a Laser Pulse Repetition Rate of 300 kHz. The origin and unit vectors of the point cloud are defined relative to the scanner and this reference frame is named the Scanner’s Own Coordinate System (SOCS). Air temperature was typically below the VZ1000’s minimum operating temperature, so we placed the scanner in a custom-designed heater case (Fig. 1 ) to maintain its temperature within acceptable bounds. Due to the overall flat topography, occlusions, and low reflectivity of snow and ice at 1550 nm, the scanner collects useful data up to 100–200 m from its location. To map a larger area, on each measurement day we relocated the scanner to additional scan positions and acquired subsequent measurements (i.e., we collected a SingleScan at each scan position), which we then linked together into a Project. Choosing the locations for scan positions was a trade-off between maximizing the area covered, co-locating with other measurements, minimizing shadows within the measured area, and not trespassing in sensitive measurement sites. In order to make a complete map, we registered the SingleScans measured from each scan position into a common reference frame. We placed Riegl 10 cm cylinder reflectors on posts frozen into the ice (Figs. 1 , 2 ), and used them to locate and orient each SingleScan in the Project’s common reference frame (described below in Data Processing in RiSCAN). Typically between 4 and 10 reflectors were visible from each scan position.
These reflectors also served as the starting point for aligning Projects collected in the same Scan Area on different days into a lagrangian, ice-fixed reference frame.
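At a fixed angular stepwidth, the spacing between adjacent laser samples grows linearly with range, which is part of why useful data are limited to 100–200 m from the scanner. A back-of-envelope sketch of this relationship (a hypothetical helper for illustration, not part of pydar):

```python
import math

def point_spacing(range_m: float, stepwidth_deg: float = 0.025) -> float:
    """Approximate spacing (m) between adjacent laser samples at a given
    range, for the 0.025 degree angular stepwidth used at MOSAiC."""
    return range_m * math.radians(stepwidth_deg)

# Sub-cm sampling near the scanner, but several cm at long range:
for r in (10, 50, 100, 200):
    print(f"{r:4d} m -> {point_spacing(r) * 100:.1f} cm")
```

This small-angle chord length ignores incidence angle; on the nearly horizontal snow surface, the along-beam spacing between successive vertical steps at grazing incidence is considerably larger.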
Scan areas
Repeat TLS observations on MOSAiC were focused on three primary Scan Areas: Snow1, Snow2, and ROV (Fig. 2 and Table 1 ). These Scan Areas were selected to observe a variety of ice topography, to co-locate with other measurements, and to be logistically accessible. In March, ice dynamics caused the Met City atmospheric measurement site 3 and the on-ice Remote Sensing measurement site 2 to be removed from the Snow1 scan area (where they had been located from October through February). Additional TLS measurements were made at these sites (which we labelled ‘RS’ for Remote Sensing and ‘MET’ for Met City; Table 1 ). Below are descriptions of each Scan Area.
Snow1 Scan Area
The Snow1 Scan Area (hereafter we use ‘Snow1’ to refer specifically to the TLS measurement area) was generally off the bow and port side of Polarstern and composed of residual ice of which only the upper 30 cm was solid when we arrived in October 22 , refrozen melt ponds, first year ice in refrozen leads, and first year ridges. We first measured Snow1 on 18 October and our final measurement was on 3 May. Ice dynamics caused frequent (multiple times per month) crack and ridge formation in the Scan Area. These dynamics resulted in substantial variation in the area of ice measured and the number of SingleScans collected in each Project. Snow1 was co-located with the SLoop snow and ice thickness transect 23 , the Snow1 snow sampling area 2 , the BGC3 ice coring area, the Met City atmospheric measurement site 3 (18 October to 28 February), the first on-ice Remote Sensing site 2 (18 October to 15 November), the second on-ice Remote Sensing site 2 (6 December to 28 February), the Ocean City oceanic measurement site 4 (18 October to 6 December), the Bow Stakes mass balance site 20 (6 December to 28 February), the Met Stakes mass balance site 20 (18 January to 3 May), the Stakes3 mass balance site 20 (6 December to 26 April), the second Remotely Operated Vehicle site 2 (1 November to 6 December), and a number of vibrating wire ice stress gauges 2 . The given dates are the first and last dates that the installations were present in the TLS data, not necessarily the installation or decommission dates of the installations.
Snow2 Scan Area
The Snow2 Scan Area (hereafter we use ‘Snow2’ to refer specifically to the TLS measurement area) was generally off the bow and starboard side of Polarstern beyond the logistics area 2 and was composed of refrozen melt ponds and an approximately 1 m tall (on average) second year ridge. No ice thickness measurements were made in Snow2, but the residual ice was likely thicker than that in Snow1. Similar level ice in the NLoop transect had a modal thickness of approximately 75 cm in early November 23 . We first measured Snow2 on 6 November and our final measurement was on 9 May. Snow2 was the most stable region observed. The core of the measurement area was not deformed from 6 November to 9 May, with the exception of a 1-m-wide crack that formed between 13 November and 6 December. The measurement area progressively shrank due to ice dynamics removing areas in November, March, April, and May. On two occasions (6 December and 6 February) we made measurements on Snow1 and Snow2 as part of the same Project. Snow2 was co-located with the Snow2 snow sampling area 2 , an atmospheric flux chamber measurement site, a section of the NLoop snow and ice thickness transect 23 (6 November to 6 December), the Stakes1 mass balance site 20 (only on 6 December, thereafter this was part of the ROV Scan Area), and the Alli’s ridge measurement site 2 (only on 6 December, thereafter this was part of the ROV Scan Area).
ROV Scan Area
The ROV Scan Area (hereafter referred to as ‘ROV’, it was named for the Remotely Operated Vehicle installation) was generally off the stern and starboard side of Polarstern beyond the logistics area 2 and was composed of deformed second year ice, refrozen melt ponds, level first year ice, and first year ridges. We first measured the ROV Scan Area on 4 January and our final measurement was on 9 May. In early January, the modal ice thickness on both level first year ice 20 and level second year ice 23 was approximately 1 m. In March, ice dynamics displaced the first year ice region out of the ROV Scan Area and demolished the Ft. Ridge measurement site. These dynamics also produced young ice in a 40-m-wide, refrozen lead. The core region of ROV was connected to Snow2 from 6 December to 9 May. On two occasions—4 April and 9 May—we made measurements on Snow2 and ROV as part of the same project. ROV was co-located with the third Remotely Operated Vehicle site 2 , the NLoop snow and ice thickness transect 23 , the Ft. Ridge measurement site 2 (4 January to 18 March), half of the Alli’s Ridge measurement site 2 (4 January to 22 February), the Ridge Ranch mass balance site 20 (19 January to 11 March), the Stakes4 mass balance site 20 (4 January to 22 February), the David’s Ridge measurement site 2 (14 March to 9 May), the ROV3 broadband albedo transect 24 (11 April to 9 May), the SYI broadband albedo transect 24 (29 April to 9 May), and a number of vibrating wire ice stress gauges 2 .
Data Processing in RiSCAN
TLS data acquisition and initial post-processing steps were conducted in RiSCAN PRO (Riegl’s software for TLS data acquisition and processing: http://www.riegl.com/products/software-packages/riscan-pro/ ) following standard protocols described in the user manual. The following steps were conducted for each Project. First, an arbitrary SingleScan was chosen to be the origin of the Project. This SingleScan was levelled to establish the horizontal axis plane using the VZ1000’s onboard inclination sensors. Next, another SingleScan was registered to the first by computing the rigid transformation that minimized the sum of the least-squares error in the positions of pairs of reflectors (a.k.a. ‘keypoints’) observed in both SingleScans. We repeated this process until all SingleScans in the Project were registered. Finally, we refined the registration of each SingleScan except for the origin with RiSCAN’s ‘Multi-Station Adjustment’. In this process, meter-scale planar facets were extracted from each SingleScan. Then, pairs of overlapping facets and pairs of reflectors between the SingleScans were all used as keypoints and an optimization procedure adjusted the rigid transformations of each SingleScan (except the origin) in order to minimize the sum of the least-squares error between all keypoints. Unlike in urban environments where walls, roads, and other human-made objects provide large planar facets, planar surfaces at MOSAiC were mostly meter-scale or smaller wind-scoured snow surfaces. When conducting Multi-Station Adjustment, we found that using a search radius (maximum distance between potential keypoints) between 0.3 and 0.7 m produced the best results. Search radii larger than this tended to match planar facets that were not truly co-planar (e.g., different faces of a snow dune), which causes misalignment.
The data for each Project were stored in a directory with the same name as the Project. All relevant parameters were exported from RiSCAN into open formats. The rigid transformation for each SingleScan is represented by a 4 × 4 matrix (using homogeneous coordinates 25 ) that is named the Scanner’s Own Position (SOP) matrix. The SOP matrices for each SingleScan were exported into tab-delimited .DAT files in the Project directory (e.g., ‘ScanPos001.DAT’). We gave each reflector a unique identifier (e.g., ‘r01’) that was consistent across all of the Projects. We exported the reflector positions (in the Project’s reference frame) for each Project into a comma-delimited file (named ‘tiepoints.csv’) in the Project directory. Finally, we exported the point cloud data itself for each SingleScan in LAS 1.4 format into a subdirectory named ‘lasfiles’. LAS 1.4 ( https://github.com/ASPRSorg/LAS ) is an open, community standard for point cloud data that is maintained by the American Society for Photogrammetry and Remote Sensing.
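In homogeneous coordinates, applying a SOP matrix to a point cloud is a single matrix product. A minimal numpy sketch (the example matrix values are invented for illustration, not taken from the dataset):

```python
import numpy as np

def apply_sop(points_socs: np.ndarray, sop: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud from the Scanner's Own Coordinate
    System into the Project frame using a 4x4 SOP matrix."""
    homo = np.hstack([points_socs, np.ones((points_socs.shape[0], 1))])  # (N, 4)
    return (sop @ homo.T).T[:, :3]

# Example SOP (values invented): rotate 90 degrees about z, translate 100 m in x.
theta = np.pi / 2
sop = np.array([
    [np.cos(theta), -np.sin(theta), 0.0, 100.0],
    [np.sin(theta),  np.cos(theta), 0.0,   0.0],
    [0.0,            0.0,           1.0,   0.0],
    [0.0,            0.0,           0.0,   1.0],
])
pts = np.array([[1.0, 0.0, 0.0]])
print(apply_sop(pts, sop))  # -> approximately [[100., 1., 0.]]
```

Because the transforms are homogeneous matrices, chaining registrations (e.g., SingleScan to Project, then Project to an aligned Scan Area frame) is simply a product of the corresponding matrices.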
Data processing in pydar
Overview of pydar
The overall objective of pydar is to align SingleScans from different Projects in the same Scan Area into a lagrangian, ice-fixed reference frame such that one can observe topographic changes (e.g., snow deposition and erosion) over time. Furthermore, we sought to facilitate re-use of these data by preserving features of the data even when they are unimportant for our use case (e.g., cm-scale point density near the scanner, backscatter reflectance data, etc...) and enabling future researchers to revise our alignment of SingleScans (e.g., if they design a superior alignment procedure). This section provides a brief description of key features of pydar 19 and the steps taken to process the repeat TLS data from MOSAiC. Additional functionality and implementation details can be found in the code documentation. To achieve these goals, pydar has an object-oriented design that mimics the hierarchical structure of the TLS data. pydar also distinguishes between the spatial relationships of TLS points measured from the same scan position (i.e., within a SingleScan) and the spatial relationships of TLS points measured from different scan positions (i.e., different SingleScans in a Project). pydar is implemented primarily in Python 26 , with substantial use of the Numpy 27 , Scipy 28 , and VTK 29 libraries. Some functionality is implemented in Cython 30 .
The core of pydar consists of four related classes: SingleScan, Project, ScanArea, and TiePointList (in this manuscript we use teletype font to reference classes and methods in code). SingleScan objects store the point cloud data for that SingleScan in the Scanner’s Own Coordinate System (as a vtkPolyData object: SingleScan.polydata_raw) and the rigid transformation to transform that point cloud into the desired reference frame (as a vtkTransform object: SingleScan.transform). Separating the spatial information for the point cloud of an entire Project into SingleScans enables us to adjust the spatial relationships between points from different SingleScans (as we do below) without altering the spatial relationships within each SingleScan. SingleScan also contains methods for filtering the point cloud data (e.g., FlakeOut 31 ) and reading and writing the data. When we filter TLS points, we set flags in the ‘Classification’ data field (following LAS 1.4 conventions: https://github.com/ASPRSorg/LAS ), rather than deleting points. A Project object contains the set of SingleScan objects for this Project (as a dictionary: Project.scan_dict) and an object representing the reflector positions (an instance of TiePointList). Project also contains methods for visualizing the data (e.g., Project.display_project), writing data output (e.g., Project.write_las_pdal), and converting the point cloud into a surface representation (e.g., Project.point_to_grid_average_image). Finally, a ScanArea object contains a set of Project objects for this Scan Area (as a dictionary: ScanArea.project_dict) and methods for aligning the SingleScans within those Projects (e.g., ScanArea.z_tilt_alignment_ss, see below).
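The hierarchy described above can be summarized schematically. The sketch below is a simplified stand-in, not pydar's actual implementation (the real SingleScan stores a vtkPolyData in polydata_raw and a vtkTransform in transform, and the classes carry many more methods):

```python
from dataclasses import dataclass, field

@dataclass
class SingleScan:
    """Point cloud from one scan position, in the Scanner's Own Coordinate
    System, plus the rigid transformation into the desired frame."""
    points: list      # stands in for polydata_raw (a vtkPolyData in pydar)
    transform: list   # stands in for a vtkTransform (a 4x4 rigid transform)

@dataclass
class Project:
    """All SingleScans collected in one measurement day, plus reflectors."""
    scan_dict: dict = field(default_factory=dict)  # e.g. {'ScanPos001': SingleScan}
    tiepoints: dict = field(default_factory=dict)  # reflector id -> position

@dataclass
class ScanArea:
    """A succession of Projects over the same region of ice."""
    project_dict: dict = field(default_factory=dict)  # project name -> Project
```

Keeping each SingleScan's raw points separate from its transform is what lets the alignment steps below adjust the relationships between SingleScans without ever rewriting the point data itself.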
Filtering of TLS data
Wind-blown snow particles were filtered using FlakeOut 31 with the standard parameters (z_max = 3, radial_precision = 0.005, z_std_mult = 3.5, and leafsize = 100) and assigned the classification flag ‘65’ (the LAS 1.4 standard prescribes that user-defined classifications be greater than 63). Additionally, we manually filtered the logistics area (classification flag ‘73’) from the Snow2 and ROV data because vehicle traffic there substantially disturbed the snow surface. For TLS data that has been processed by pydar, we store the processed data in a subdirectory of the Project directory named ‘npyfiles_archive’. Within the ‘npyfiles_archive’ subdirectories there are subdirectories for each SingleScan (e.g., ‘ScanPos001’). These subdirectories contain numpy 27 files for the point locations in the Scanner’s Own Coordinate System (‘Points.npy’) and each data attribute (e.g., ‘Reflectance.npy’, ‘Classification.npy’, etc...). We decided to store the data in this manner for three reasons. First, ‘.npy’ is a space-efficient, open-source format ( https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html ) which is easy to open with widely-available tools. Second, separating the data attributes allows the user to only load the attributes they need into memory, which is useful because the ‘.las’ file for a SingleScan is approximately 600 MB. Third, reading ‘.npy’ files into memory in Python 26 is considerably faster than reading ‘.las’, which speeds up the overall workflow.
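Because filtered points are flagged rather than deleted, downstream code selects points by classification when loading the archived ‘.npy’ attributes. A sketch of that pattern (the directory layout follows the description above; the helper names are ours, not pydar's):

```python
import numpy as np

FLAKE_FLAG = 65      # wind-blown snow particles flagged by FlakeOut
LOGISTICS_FLAG = 73  # manually filtered logistics area (Snow2 and ROV)

def keep_mask(classification: np.ndarray) -> np.ndarray:
    """Boolean mask selecting points that were not flagged by filtering."""
    return ~np.isin(classification, (FLAKE_FLAG, LOGISTICS_FLAG))

def load_filtered_points(scan_dir: str) -> np.ndarray:
    """Load a SingleScan's archived points (in SOCS) and drop flagged points.

    scan_dir is a SingleScan subdirectory of 'npyfiles_archive',
    e.g. '<project>/npyfiles_archive/ScanPos001'.
    """
    points = np.load(f"{scan_dir}/Points.npy")                  # (N, 3) floats
    classification = np.load(f"{scan_dir}/Classification.npy")  # (N,) flags
    return points[keep_mask(classification)]
```

Loading only ‘Points.npy’ and ‘Classification.npy’, rather than a full ~600 MB ‘.las’ file, is exactly the memory and speed benefit the per-attribute storage was designed for.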
Alignment of TLS data
For our purposes of observing snow accumulation and redistribution, the greatest source of error is bias due to misalignment of SingleScans from measurements on different dates. The stochastic errors due to measurement uncertainty within a SingleScan are insignificant 15 . We developed a three-step process to align SingleScans from one Project (Project_1) to another Project (Project_0):

1. Coarsely align Project_1 to Project_0 by minimizing the least-squares error between their reflector positions.
2. Align each SingleScan in Project_1 to the nearest SingleScan in Project_0 using local maxima as keypoints, which reduces tilt biases (a tilt bias of 0.0001 radian creates a vertical error of 0.01 m at 100 m distance from the scanner).
3. Perform a fine-scale vertical alignment for each SingleScan in Project_1 by minimizing modal vertical differences between it and Project_0.
For reflector alignment, we labeled each reflector with a consistent name (e.g., ‘r01’) in RiSCAN. Ice deformation and errors in the VZ1000’s reflector search process can shift a reflector relative to the other reflectors. We manually compared the pairwise distances between reflectors and used only the set of reflectors whose pairwise distances between Projects changed by 0.02 m or less. This aligned the scans horizontally to within 0.02 m. Typically, this set comprised 4 to 8 reflectors. With this set of reflectors as keypoints, we computed the rigid transformation that minimized the least-squares error between the keypoints 32 (implemented in TiePointList.calc_transformation). The default version of this rigid transformation calculation (mode = 'LS' in TiePointList.calc_transformation) requires at least 3 pairs of keypoints and has six degrees of freedom: translations in the 3 spatial directions and rotations around the 3 unit vectors (roll, pitch, and yaw). However, sometimes when there were just 3 or 4 pairs of keypoints, small vertical errors in the reflector positions produced unrealistic tilts, assessed manually by looking at the vertical differences between the aligned Projects. For these cases and when there were only 2 pairs of keypoints, we calculated the rigid transformation without permitting tilt changes (mode = 'Yaw' in TiePointList.calc_transformation). Finally, in two cases, ice dynamics caused there to be no reflectors whose pairwise distance changed by less than 0.02 m. In these cases, we used a single reflector to determine the translational components of the rigid transformation and manually adjusted the yaw component such that flag posts frozen into the ice (see Fig. 1 for example) aligned to within 0.02 m at their bases. Manual inspection of the results indicated that reflector alignment brings vertical biases within 0.05 m and tilt biases within 0.001 radian.
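The least-squares rigid transformation between paired keypoints is the classic orthogonal Procrustes (Kabsch) problem. A minimal numpy sketch of the full six-degree-of-freedom case (a stand-in for illustration, not TiePointList.calc_transformation itself, and without the restricted 'Yaw' mode):

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) such that dst ~= src @ R.T + t.

    src, dst: (N, 3) arrays of paired keypoints (e.g., reflector positions
    in two Projects), with N >= 3. Returns rotation R (3x3), translation t (3,).
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

The reflector step applies this with a handful of keypoint pairs; the local maxima step below applies the same machinery with hundreds of pairs.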
Local maxima alignment for each SingleScan followed the same process as reflector alignment, except that it used local maxima as keypoints instead of reflectors. Local maxima are mostly the crests of ridges, hummocks, or human installations (e.g., poles) and are unlikely to erode or accumulate snow. A pair of local maxima from the SingleScans was required to be within a set spatial tolerance of each other to be used as keypoints. We defined the tolerance in cylindrical coordinates centered on the scanner’s location of the SingleScan that was being aligned. We used a tolerance of 0.0008 radian yaw, 0.001 radian tilt, and 0.1 m radial difference. Local maxima were located on 5 × 5 m regions (changing the region size to 2 × 2 m or 10 × 10 m had no discernable impact on the results). These settings produced several hundred keypoints for each pair of SingleScans from the different Projects. The tilt biases after local maxima alignment were less than 0.0001 radian (i.e., a 1 cm vertical offset across a 100 m distance), determined by manual inspection. Local maxima alignment is implemented in ScanArea.max_alignment_ss.
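The keypoint pairing step can be sketched as below, assuming both sets of local maxima are already expressed in the scanner-centered frame of the SingleScan being aligned. The tolerances are those quoted above; the matching logic itself is illustrative, not the code in ScanArea.max_alignment_ss:

```python
import numpy as np

def match_maxima(pts_a, pts_b, yaw_tol=0.0008, tilt_tol=0.001, r_tol=0.1):
    """Pair local maxima from two scans that fall within angular and
    radial tolerances in scanner-centered cylindrical coordinates.
    pts_a, pts_b: (N, 3) x, y, z arrays relative to the scanner position."""
    def cyl(p):
        r = np.hypot(p[:, 0], p[:, 1])        # radial distance
        yaw = np.arctan2(p[:, 1], p[:, 0])    # azimuth angle
        tilt = np.arctan2(p[:, 2], r)         # elevation angle
        return r, yaw, tilt
    ra, ya, ta = cyl(pts_a)
    rb, yb, tb = cyl(pts_b)
    pairs = []
    for i in range(len(pts_a)):
        dyaw = np.abs(np.angle(np.exp(1j * (yb - ya[i]))))  # wrapped angle
        ok = (dyaw < yaw_tol) & (np.abs(tb - ta[i]) < tilt_tol) \
             & (np.abs(rb - ra[i]) < r_tol)
        j = np.flatnonzero(ok)
        if len(j) == 1:          # keep only unambiguous matches
            pairs.append((i, j[0]))
    return pairs
```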
Finally, for the fine-scale vertical alignment, we exploited the fact that numerous field observations at MOSAiC suggested that a plurality of the snow surface did not change on weekly, or even monthly, time-frames. These observations included: snowmobile tracks did not typically get covered by snow except for isolated, distinct snow drifts. The indentations made by the feet of the TLS tripod were often visible when we revisited scan position locations. The circular mark left in the snow by the atmospheric flux chamber measurement could be identified months after the measurement was made. Cm-scale micro-relief on the snow surface observed near the TLS scanner (where point density is very high) appears consistent between scans, unless a snow dune happened to form in that location. Certain distinctive snow features (e.g., barchan dunes) remained unchanged for months. If the plurality of the snow surface does not change between Projects, then the modal vertical difference must be zero. For each SingleScan, we computed the distribution of vertical differences between it and the Project we were aligning it to. We used a raster with 1 m horizontal resolution created by averaging the z-components of the TLS points within each grid cell 16 . We used only grid cells with at least 25 points per square meter in the SingleScan and the Project it was being aligned to. The vertical component of the transformation for the SingleScan was set such that the modal difference is zero. Fine-scale vertical alignment is implemented in ScanArea.z_alignment. Manual inspection of the results indicated that vertical biases were reduced to within about 0.01 m (see below for Technical Validation).
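The modal-difference idea can be sketched as follows: raster both clouds onto a 1 m grid of mean heights (keeping only cells with at least 25 points in each cloud), histogram the per-cell vertical differences, and take the histogram peak as the offset to remove. This is an illustrative re-implementation, not the code in ScanArea.z_alignment:

```python
import numpy as np

def modal_dz(points_a, points_b, cell=1.0, min_pts=25, bin_w=0.005):
    """Modal vertical difference between two point clouds on a common
    horizontal grid; points_a, points_b are (N, 3) x, y, z arrays."""
    def raster(p):
        ix = np.floor(p[:, 0] / cell).astype(int)
        iy = np.floor(p[:, 1] / cell).astype(int)
        cells = {}
        for i, j, z in zip(ix, iy, p[:, 2]):
            cells.setdefault((i, j), []).append(z)
        # mean height per grid cell, keeping only well-sampled cells
        return {k: np.mean(v) for k, v in cells.items() if len(v) >= min_pts}
    ra, rb = raster(points_a), raster(points_b)
    dz = np.array([ra[k] - rb[k] for k in ra.keys() & rb.keys()])
    counts, edges = np.histogram(
        dz, bins=np.arange(dz.min() - bin_w, dz.max() + 2 * bin_w, bin_w))
    k = np.argmax(counts)
    return 0.5 * (edges[k] + edges[k + 1])   # center of the modal bin
```

Subtracting the returned value from the first cloud's heights sets the modal vertical difference to zero, even when a minority of cells experienced real accumulation.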
In Snow1, deformation occurred frequently and throughout the Scan Area, such that there was no core ice floe with multiple reflectors on it that did not experience ice deformation for an extended period of time (as there was for Snow2 and ROV). Alignment steps 2 and 3 are predicated on the ice floe itself remaining the same, and hence could not be applied in this dynamic environment. We include Snow1 for completeness and for use by future researchers (e.g., these ice dynamics would not prevent another researcher from using the Snow1 data to compute aerodynamic drag coefficients). However, we did not validate its vertical alignment to cm-scale accuracy nor do we recommend that the data be used for snow accumulation without further work quantifying ice deformation (which is beyond the scope of this data processing).
For convenience and to avoid needing to recompute the transformations, the rigid transformation aligning each SingleScan has been written out to a ‘.npy’ file that can be loaded directly in pydar. In the Project’s directory, there is a subdirectory named ‘transforms’, within which there are subdirectories for each SingleScan (e.g., ‘ScanPos001’). The transformation is in this subdirectory and is named ‘current_transform.npy’. Finally, because the ROV and Snow2 Scan Areas were connected, we decided to place them within the same lagrangian, ice-fixed reference frame (for which the origin happens to be near the Remotely Operated Vehicle tent 2 ).
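For example, a stored transformation can be loaded with numpy and applied to scanner-frame points; the 4 × 4 homogeneous matrix layout assumed here should be checked against pydar's documentation:

```python
import numpy as np

def apply_transform(points, T):
    """Apply a 4x4 homogeneous rigid transform to an (N, 3) point array."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    return (pts_h @ T.T)[:, :3]

# Usage with the archived files (layout per the directory description;
# the 4x4 matrix format of the '.npy' file is an assumption to verify):
# T = np.load("<ScanArea>/<Project>/transforms/ScanPos001/current_transform.npy")
# aligned = apply_transform(scan_points, T)
```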
Surface Reconstruction from Point Clouds
Many applications of topographical data, including measurement of snow accumulation, require surfaces or gridded data rather than point clouds (the format of TLS data). To produce gridded surfaces, we used gaussian process regression 33 . Also known as kriging, gaussian process regression is an interpolation technique that provides the best linear unbiased estimate (minimizes least-squares error) of a parameter (e.g., surface height) at unsampled locations (e.g., a regular grid of points) given nearby measurements (e.g., TLS points) and the covariance function (also referred to as the kernel) 33 , 34 . It has previously been applied to TLS data on Arctic sea ice 35 . The vertical uncertainty in an individual TLS data point increases with distance from the scanner due to the divergence of the laser beam (0.3 mrad for the VZ1000) and is represented as gaussian noise 15 . Because the TLS collects useful data up to 200 m from the scanner, the vertical uncertainties in individual data points vary by an order of magnitude. One advantage of gaussian process regression is that it can factor in the vertical uncertainty of each point when interpolating.
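The underlying math can be illustrated at small scale without a GPU. The sketch below implements simple kriging with an exponential covariance and per-point noise variances; the production pipeline used GPyTorch with a KeOps kernel at full scale, so this is illustrative only:

```python
import numpy as np

def exp_cov(d, sill, rang):
    """Exponential covariance; 'rang' is the distance at which correlation
    falls below 5% (exp(-3) ~ 0.05), matching the definition in the text."""
    return sill * np.exp(-3.0 * d / rang)

def krige(xy, z, noise_var, grid, sill, rang):
    """Gaussian process (simple kriging) prediction at grid locations,
    with per-point gaussian noise variances on the observations."""
    mu = z.mean()
    d_xx = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = exp_cov(d_xx, sill, rang) + np.diag(noise_var)
    d_gx = np.linalg.norm(grid[:, None, :] - xy[None, :, :], axis=-1)
    k_star = exp_cov(d_gx, sill, rang)
    return mu + k_star @ np.linalg.solve(K, z - mu)
```

With negligible noise the predictor reproduces the observations at their own locations, and far from all observations it reverts to the mean.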
We chose an exponential covariance function because it is the simplest covariance function that can represent continuous, non-differentiable (i.e., not-smooth) surfaces 34 , and because wind-driven spatially variable snow deposition and erosion produce rough snow bedforms on horizontal scales of 10 cm to several meters 36 . The exponential covariance function contains two hyperparameters: the ‘range’, defined as the distance at which the correlation between two points is less than 5%; and the ‘sill’, defined as the variance between two uncorrelated points (i.e., points further apart than the range). We can estimate appropriate values of the hyperparameters from the data itself, by optimizing the marginal likelihood of the gaussian process 34 . On the scale of our scan areas (several hundred meters across) the snow and ice topography varies from rough areas of pressure ridges and rubble to smooth areas on level ice. To account for this spatial variability, we divided the domain into a grid of non-overlapping 1.2 m by 1.2 m subdomains. For each subdomain we estimated the sill from the variance of TLS points within a 5 m radius of the center of the subdomain and the range via marginal likelihood optimization using GPyTorch 37 with a KeOps 38 kernel on an NVIDIA Quadro P2000 GPU. The size of the subdomains was chosen such that they were significantly smaller than the spatial scales of the pressure ridges (which were at least tens of meters), balancing GPU memory limitations against computational time. After estimating the hyperparameters, we interpolated the gaussian processes on a regular grid with 10 cm spacing. This surface reconstruction process is implemented in Project.merged_points_to_image.

Abstract

Snow and ice topography impact and are impacted by fluxes of mass, energy, and momentum in Arctic sea ice.
We measured the topography of an approximately 0.5 km² drifting parcel of Arctic sea ice on 42 separate days from 18 October 2019 to 9 May 2020 via Terrestrial Laser Scanning (TLS). These data are aligned into an ice-fixed, lagrangian reference frame such that topographic changes (e.g., snow accumulation) can be observed for time periods of up to six months. Using in-situ measurements, we have validated the vertical accuracy of the alignment to ± 0.011 m. This data collection and processing workflow is the culmination of several prior measurement campaigns and may be generally applied for repeat TLS measurements on drifting sea ice. We present a description of the data, a software package written to process and align these data, and the philosophy of the data processing. These data can be used to investigate snow accumulation and redistribution, ice dynamics, and surface roughness, and they can provide valuable context for co-located measurements.
Data Records
Repeat TLS data in Table 1 are available at the Arctic Data Center 21 . The top level of the archive contains a directory for each Scan Area: Snow1, ROV (which includes the Snow2 projects since they are in the same reference frame), RS (April Remote Sensing site), and MET (single Project focused on Met City on 4 May). Within these directories is a subdirectory for each Project that contains all data records for that Project (as described in the Methods section). An illustrative directory tree for a Scan Area is shown below.
Technical Validation
Alignment Validation
We qualitatively assessed the alignment results by examining the patterns of snow accumulation and redistribution in comparison with the locations of the scan positions. In general, after the full alignment process we did not find artifacts related either to the distance from the nearest scan position or to regions observed from different scan positions. In contrast, such artifacts were readily apparent when conducting only reflector alignment, indicating that the local maxima and fine-scale vertical alignment steps improved the alignment for observing snow processes. To quantitatively assess the uncertainty in our alignment procedure, we used a Bayesian statistical model to compare changes in snow thickness measured by TLS with manual measurements of snow thickness changes at snow thickness stakes in the Ridge Ranch mass balance site 39 , while accounting for the uncertainties in the individual measurements. The Ridge Ranch mass balance site was located on level, first-year ice within the ROV Scan Area from 19 January to 18 March (at which point ice deformation relocated Ridge Ranch outside of the TLS measurement area). Ridge Ranch included nine snow thickness stakes, arranged in a cross, with approximately eight meters between stakes. Each stake was frozen into the ice and had a metric length scale marked on its side. Snow thickness was measured by manually observing the location of the snow surface on this metric scale with an accuracy of 0.01 m. Changes in snow thickness at each stake can be determined by comparing repeat measurements. Because the stakes were permanently frozen into the ice, these measurements directly recorded changes in the snow surface height. They are unaffected by the changes in snow or snow-ice interface properties that may change the penetration depth of a snow thickness probe 40 . Manual measurements at Ridge Ranch were made on 28 January, 5 February, and 7 March. ROV TLS measurements were made on 25 January, 4 February, and 11 March (Table 1 ).
Observers in the field did not observe snow accumulation or drifting snow at Ridge Ranch during 25–28 January, 4–5 February, and 7–11 March. From these three TLS measurements and three manual stake measurements, we defined two evaluation periods (Table 3 ) in which the TLS and the manual measurements should observe the same change in the surface at each stake, if there were no measurement noise or bias due to scan misalignment.
We define S k,i as the change in the snow surface observed by manual stake measurements at stake i over an evaluation period k . Mathematically, S k,i is the true change in the snow surface r k,i plus noise due to measurement error n k,i (Eq. 1 ). We consider the measurement accuracy of an individual stake reading (0.01 m) to represent two standard deviations of the measurement noise. This implies that the standard deviation of the measurement noise for a single stake reading is 0.005 m. Thus, we represent n k,i as an instance of a zero-mean, normally distributed random variable with a variance of 2 × (0.005 m)². Note that each S k,i is the difference of two independent measurements, hence the multiplication by two in the variance. We define t k,i as the change in the snow surface observed by TLS for each stake and evaluation period. t k,i is the true change r k,i plus measurement noise m k,i minus a constant bias for the evaluation period b k due to scan misalignment (Eq. 3 ). To quantify the change observed in the TLS data at a stake, we looked at the mean vertical distance for all exclusive pairs of horizontally closest points within 10 cm of the stake (excluding the stake itself) in the scans at the beginning and end of the evaluation period. The measurement noise m k,i is represented as an instance of a zero-mean, normally distributed random variable with a variance determined by the vertical uncertainty in each TLS point due to the laser beam spreading with distance 15 .
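The t k,i computation can be sketched as below, interpreting ‘exclusive pairs of horizontally closest points’ as greedy one-to-one matching on horizontal distance; this pairing rule is an assumption for illustration and may differ from the actual implementation:

```python
import numpy as np

def stake_dz(pts0, pts1, stake_xy, radius=0.1):
    """Mean vertical distance between exclusive pairs of horizontally
    closest points near a stake, from two aligned scans (stake points
    are assumed to have been removed already)."""
    def near(p):
        d = np.hypot(p[:, 0] - stake_xy[0], p[:, 1] - stake_xy[1])
        return p[d < radius]
    a, b = near(pts0), near(pts1)
    # greedy exclusive matching on horizontal distance
    dists = np.hypot(a[:, None, 0] - b[None, :, 0],
                     a[:, None, 1] - b[None, :, 1])
    pairs, used_a, used_b = [], set(), set()
    for idx in np.argsort(dists, axis=None):
        i, j = np.unravel_index(idx, dists.shape)
        if i not in used_a and j not in used_b:
            used_a.add(i); used_b.add(j)
            pairs.append(b[j, 2] - a[i, 2])
    return float(np.mean(pairs))
```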
To quantify how misaligned our TLS measurements may be, we computed the posterior distribution of the bias, b k , given our measurements at each stake. We define y k,i as the difference between the stake measurement S k,i and the TLS measurement t k,i for each stake and evaluation period (Eq. 5 ). y k,i is equal to the bias for the evaluation period b k plus the difference in the measurement noise for each measurement, which we denote by g k,i . The measurement noises for the TLS and stake measurements are independent and normally distributed, allowing us to represent their difference as a zero-mean, normally distributed random variable whose variance is the sum of the variances of each measurement noise. Thus, each difference y k,i is an instance of a normally distributed random variable with mean b k and variance equal to the sum of the two measurement-noise variances.
Applying Bayes’ rule, the posterior probability density of b k given the data for an evaluation period is proportional to the prior probability density multiplied by the likelihood (Eq. 8 ). y k denotes the set of all observations for evaluation period k . Equation 5 indicates that the likelihood of a single observation is gaussian. Conditioned on the bias, the observations are independent. Hence the likelihood of all observations is their product (Eq. 9 ). Finally, we chose to represent our prior for the bias as a normal distribution with a mean of zero (we do not expect there to be any bias) and a variance of (0.02 m)². This variance was chosen because a bias of greater than 0.04 m (twice the standard deviation of our prior) would be obvious on manual inspection of the data, and was not observed. Moreover, the variance of the uncertainty in the observations is lower than the variance of this prior, so increasing the variance of the prior has little impact on the results (i.e., the information content of the observations is considerably higher than that of this prior). This yields the expression for the posterior probability density (Eq. 10 ).
Algebraic simplification of Eq. 10 yields that the posterior density is a gaussian (Eq. 11 ) whose mean is a weighted average of the prior mean and the observations, each weighted by the inverse of its variance (Eq. 12 ). The variance of the posterior is determined from the sum of the inverse variances of the prior and the observations (Eq. 13 ).
We use the posterior density of the bias to establish a 95% credible interval (the interval between the 2.5th and 97.5th percentiles in the distribution of a parameter) for the bias in our alignment procedure.
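The conjugate-normal update described by Eqs. 11 – 13 reduces to a few lines. The sketch below uses the prior standard deviation of 0.02 m from the text and takes the 95% credible interval as the ± 1.96 standard deviation band of the gaussian posterior:

```python
import numpy as np

def bias_posterior(y, var_y, mu0=0.0, var0=0.02 ** 2):
    """Gaussian posterior N(mu, var) for the alignment bias b_k, given
    stake-minus-TLS differences y with per-observation variances var_y
    and a N(mu0, var0) prior; precision-weighted conjugate update."""
    prec = 1.0 / var0 + np.sum(1.0 / var_y)          # posterior precision
    mu = (mu0 / var0 + np.sum(np.asarray(y) / np.asarray(var_y))) / prec
    var = 1.0 / prec
    half = 1.96 * np.sqrt(var)                       # 95% credible interval
    return mu, var, (mu - half, mu + half)
```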
To validate the TLS alignment, we compared the snow surface change observed by TLS and manual measurements (Fig. 3 ) at the Ridge Ranch mass balance site 41 during evaluation periods 1 and 2 (Table 3 ). Each evaluation period included at least one snow accumulation and redistribution event. The evaluation periods collectively span 25 January to 11 March. Manual measurements indicated that most stakes experienced little to no change in the snow surface in either evaluation period. The largest snow accumulation observed manually at any of the stakes occurred during evaluation period 2, when 0.07 m of snow accumulated at two stakes (Fig. 3 ). The TLS measurements, too, observed approximately 0.07 m of change at the same two stakes and little to no change at the others (Fig. 3 ). With these data, we computed the posterior density of the bias (Table 4 ) following Eqs. 1 – 12 . The estimated mean biases for evaluation periods 1 and 2 are −0.001 m and −0.004 m, respectively, and the minimum and maximum 95% credible interval bounds are −0.011 m and 0.007 m, respectively. Thus, we conclude that the bias due to scan misalignment is less than 0.011 m. We stress that these numerical values are particular to the ROV and Snow2 scan areas at MOSAiC (regular ice deformation in the Snow1 area will require future work to correct for). Future TLS measurement campaigns should conduct in-situ validation measurements for their specific measurement sites.
Surface Reconstruction Validation
We validated our gaussian process regression approach to surface reconstruction by examining the differences between the vertical components of TLS data points and the reconstructed surface. Figure 4 shows an example of the distribution of these differences for a 250 m × 65 m region reconstructed on a 10 cm grid containing a large ridge and level ice. The mean difference is 2.5 × 10⁻⁵ m, the median difference is 6.5 × 10⁻⁵ m, the standard deviation of the differences is 0.0054 m, and the median absolute deviation, a metric of the variability of data that is robust to extreme values 42 , is 0.0017 m. This is similar in magnitude to the median standard deviation of the vertical uncertainty due to laser beam divergence: 0.0026 m. These results, combined with manual inspection of the differences, suggest that almost all of the differences between the TLS points and the reconstructed surface can be attributed to the divergence of the laser beam with distance from the scanner. A small fraction (less than 5%) of the differences are due to areas with high surface roughness on horizontal length scales of less than the 10 cm grid spacing. These rough areas cause the tails of the distribution of differences to be more extreme than a normal distribution (Fig. 4 ). For the purpose of assessing snow accumulation, the uncertainties in our surface reconstruction approach are insignificant compared to the alignment uncertainties (0.011 m, see above). However, we caution that applications involving surface roughness may want to further develop surface reconstruction techniques.
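The robust variability metric quoted here, the unscaled median absolute deviation, is simply:

```python
import numpy as np

def mad(x):
    """Median absolute deviation (unscaled); robust to the heavy tails
    that sub-grid-scale roughness adds to the difference distribution."""
    x = np.asarray(x)
    return np.median(np.abs(x - np.median(x)))
```

Note that library implementations such as scipy.stats.median_abs_deviation offer an optional normal-consistency scale factor (about 1.4826), so definitions should be checked before comparing values.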
Usage Notes
On 18 March, the ice was deforming within the ROV Scan Area while we were collecting TLS data. On 8 April, Snow1 split in two while we were collecting TLS data; data collected after the deformation began are in a second Project (‘mosaic_01_080420.RiSCAN’). We recommend caution when using data collected during deformation events. The convention in Project names was inadvertently switched from ‘MMDDYY’ to ‘DDMMYY’ at the turn of the year (except for ‘mosaic_01b_061219.RiSCAN.RiSCAN.RiSCAN’ on 6 December). A function is provided in pydar to convert a Project’s name to its date (pydar.mosaic_date_parser). Sometimes extra ‘.RiSCAN’s were included in the Project name; these have no significance.
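The naming convention can be handled as in this illustrative re-implementation (cf. pydar.mosaic_date_parser; this sketch is not pydar's code):

```python
from datetime import date

def project_date(name):
    """Parse a Project date from its name. Names use 'MMDDYY' in 2019 and
    'DDMMYY' in 2020, with the one documented exception of
    'mosaic_01b_061219' (6 December 2019)."""
    stem = name.split('.')[0]            # drop any '.RiSCAN' suffixes
    if stem.endswith('01b_061219'):      # documented exception
        return date(2019, 12, 6)
    digits = stem.split('_')[-1]         # trailing 6-digit date code
    a, b, yy = int(digits[:2]), int(digits[2:4]), int(digits[4:6])
    year = 2000 + yy
    if year <= 2019:                     # 'MMDDYY'
        return date(year, a, b)
    return date(year, b, a)              # 'DDMMYY'
```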
Researchers interested in extending the functionality of pydar (e.g., adding feature tracking functionality for ice deformation, point cloud segmentation, etc.) are encouraged to contact the corresponding author in case related efforts are underway. We also welcome discussions on potential uses of these data and collaborations with other data products.

Acknowledgements
Data used in this manuscript were produced as part of the international MOSAiC project with the tag MOSAiC20192020 and the Project_ID: AWI_PS122_00. We thank all people involved in the expedition of the research vessel Polarstern 1 during MOSAiC in 2019–2020 as listed in the MOSAiC Extended Acknowledgement 43 . Thank you to Jon Holmgren for fabricating the heater cover for the TLS. Thank you to Eric Brossier, Steven Fons, Jesper Hansen, Thomas Olufson, Martin Radenz, Carl Martin Schonning, Saga Svavarsdóttir and Monica Votvik for their capable assistance in the field. D.C.S. was supported by NSF OPP-1724540, NSF OPP-1724424 and NSF OPP-2138788. I.A.R. and D.P. were supported by NSF OPP-1724540, NSF OPP-1724424 and NSF OPP-2138785. C.P. was supported by NSF OPP-1724540 and NSF OPP-2138785. M.O. was supported by NSF OPP-1735862. A.R.M. was supported by WSL_201812N1678 and SPI DIRCR-2018-003. I.S.O.M. was supported by BMBF Grant 03F0810A, jointly funded by BMBF (DE) and NERC (UK). D.N.W. was supported by SNSF-200020_179130 and SPI DIRCR-2018-003. M.J. was supported by EU-Arice DEARice. P.I. was supported by NSF #1820927 and RCN #287871.
Author contributions
The concept of TLS snow redistribution and accumulation measurements on MOSAiC was conceived, facilitated, and supervised by C.P. D.C.-S. led the data collection from March to May 2020, developed the data processing methods, and wrote the initial draft. I.A.R. led the data collection from October 2019 to February 2020. M.P. provided guidance and advice on the technical validation and surface reconstruction. C.P. designed the heater cover for the TLS. D.P. supervised this project. P.I., M.J., A.J., A.R.M., I.M., M.O., D.W., and R.V. contributed to data acquisition. All authors reviewed the manuscript.
Code availability
pydar is available at Zenodo 19 ( https://zenodo.org/record/8120858 ).
Competing interests
The authors declare no competing interests.

License: CC BY. Citation: Sci Data. 2024 Jan 13; 11:70.
PMC10787768 (PMID: 38218967)

Introduction
Skin provides convenient access for delivering most biotherapeutics and vaccines, typically via a hypodermic needle. However, ensuring patient compliance with long-term, repetitive pharmaceutical treatments remains a significant challenge. Hypodermic injection provides a low-cost and rapid method of drug delivery, but concerns about safe disposal, potential transmission of bloodborne pathogens, and the need for trained personnel hinder its widespread implementation. Recently, advancements in transdermal drug delivery strategies, such as sonophoresis 1 , 2 , iontophoresis 3 , 4 , electroporation 5 , 6 , photomechanical waves 7 , 8 , heat 9 , 10 , microneedles (MN), and others 11 , have improved drug permeation through skin, providing safe and painless operations 12 with easy-to-use procedures 13 , 14 . However, the lack of an automated mechanism for active, precise, and coordinated drug administration over extended periods hinders their applicability for chronic pharmaceutical management. This problem becomes particularly critical for chronic diseases 15 , 16 , including diabetes 17 , hyperlipidemia 18 , asthma 19 , depression 20 , and others, where repetitive drug administrations are required, and a dynamically personalized delivery schedule could improve drug efficacy and decrease drug toxicity 21 . However, most existing drug delivery devices have limited capability to automate delivery digitally, especially outside hospital settings and in a comfortable, long-lasting fashion 22 .
Microneedles have shown great promise in facilitating the delivery of various types of drugs, including small molecules 23 , 24 , peptides 25 , nucleic acids 26 , 27 , and nanocomposites 28 – 33 . Furthermore, modulating the structural integration and chemical functionalization of MNs enables a broad range of release profiles. For example, MNs with a core-shell structure that hosts a drug reservoir inside each needle can exhibit a pre-programmed, multi-step release profile with tunability via designing the degradation time of MN shell layers 34 . This method also allows the integration of multiple drugs to enable sequential release as a combined therapy 35 . However, the complexity involved in fabrication poses challenges for manufacturing scalability, and once deployed, it becomes difficult to modify the pre-programmed release time of the microneedles. Chemical functionalization on MNs provides a solution to introduce self-sensing and self-responsiveness capabilities. For insulin delivery, researchers use specific chemical groups to functionalize the material of MNs, such as phenylboronic acid or aminoimidazole 25 , 36 , which react with glucose in body biofluids and induce structural changes in the polymer network of MNs 32 . This triggers the release of embedded drugs in response to glucose levels in the surrounding environment, enabling convenient and self-regulated long-term drug release to control chronic diseases. However, the complexity of synthesis and the reliance on local microenvironments rather than global body physiology limit its practical applicability.
In addition to passive drug release, the potential of active control to deliver drugs on demand through external triggers has paved the way for closed-loop therapeutic systems when integrated with associated health monitors, enhancing treatment precision and dynamics. Heat-triggered release represents a typical example, where MNs carry drug loaded inside a thermally responsive material (e.g., Expancel microspheres and paraffin C23) 37 , 38 that exhibits significant expansion in pore or volumetric size upon heating (typically at 47 °C) and thereby secretes the drug. Mechanisms such as these allow drug release to be controlled with thermal triggers via an electrical heater or optical illumination 39 , 40 . However, the inability to effectively focus thermal energy in a confined space of biological tissue limits its delivery precision and safety. Thin membranes based on biocompatible metals serving as a gate for drug reservoirs can enable actively controlled drug delivery 41 , 42 . The metallic gates can disintegrate or dissolve upon an electrical trigger 43 . Demonstrated devices exploit various biocompatible metals, including Mg (~30 μm), Mo (~10 μm), and Au (~300 nm), as the metal gates to form electronic implants that enable on-demand drug delivery 44 , 45 . Opening of the metallic gates by either anodic oxidation (typically for Mg and Mo) or corrosion-induced crevices (typically for gold) via electrical triggers can effectively initiate the active release behavior. The compatible integration via microfabrication technologies enables such drug-releasing mechanisms with high spatiotemporal controllability via small electrical signals (typically, +1.04 V vs. SCE) 43 , 46 , which could potentially realize pharmaceutical automation in space and time.
Here, we present a skin-interfaced drug delivery system that utilizes electrically triggered, gated MNs to realize on-demand drug delivery with high spatiotemporal controllability. The drug delivery system, named spatiotemporal on-demand patch (SOP), uses a thin gold layer (~150 nm) coated onto MNs to enable drug encapsulation and protection at the standby stage. Small electrical triggers (~2.5 V, DC) for 30 s effectively disintegrate the gold coating, which exposes the drug to initiate delivery. Microfabrication processes enable circuitry designs of the gold layer to realize release triggering of individual MNs or subsections through a wireless communication module (e.g., Near-field communication, Bluetooth Low Energy). Direct deposition of the gold layer onto MNs overcomes limitations in the fabrication complexity and device robustness associated with the reservoir systems with free-standing metallic gates, as reported previously in implantable devices 43 . Ultrafine spatial control (<1 mm²) of drug release from a single MN, active management of drug release with high temporal resolution (less than 30 s), wireless operation, and comfortable wearability highlight the enabling capabilities of the SOP. Both benchtop experiments using a fluorescent dye and in vivo studies through intracranial injection demonstrate the high potential of SOP as a general, fully wireless, wearable platform for personalized, chronic drug delivery to improve pharmaceutical efficacy and user adherence. Moreover, in vivo demonstration via intracranial injection of SOP reveals its potential utility for neural therapy and modulation. The high spatiotemporal resolution and SOP’s on-demand drug release feature make it an advanced tool for brain research with model animals, especially in studying neural circuit mapping and cause-and-effect relationships between neural activity and behavior 47 .
Capabilities offered by SOP to deliver therapeutic agents sequentially and proactively to specific brain regions associated with neurological disorders may deepen our understanding of the underlying mechanisms of their pathologies. This insight can lead to the development of more targeted treatments for disorders like Parkinson’s disease, epilepsy, depression, and Alzheimer’s disease 48 – 52 .

Methods
The sample size used in this study is based on the expected variations between animals and is comparable to many previous reports using similar techniques (cited in the corresponding sections). The sample size of each experiment can be found in the figure legends. Data from animals were excluded based on histological criteria that included injection sites, virus expression, and optical fiber placement. Only animals with injection sites/virus expression/optical fiber placement in the region of interest were included, based on our previous reports. The experiments were not randomized. Animals were allocated into experimental groups by matched gender, age, weight, etc. Investigators were blinded to the experimental groups until all data had been collected and analyzed.
C57BL/6 mice (8–10 weeks, females) were used for all animal experiments. Animals were group-housed at constant temperature (22–24 °C) and humidity (40–60%), and bred in a dedicated husbandry facility with 12/12-h light-dark cycles with food and water ad libitum and under veterinary supervision. Animals subjected to surgical procedures were moved to a satellite housing facility for recovery with the same light-dark cycle. All procedures were conducted in accordance with the NIH Guide for the Care and Use of Laboratory Animals and with the approval of the Institutional Animal Care and Use Committee at the University of North Carolina at Chapel Hill (UNC), under the protocol # 22–146 84 .
Fabrication of the SOP
Normal MN patch
The general procedure of MN fabrication is shown in Supplementary Fig. 2 . First, 50 g of PDMS (Sylgard 184, Dow Corning) was fully cured in a glass petri dish at 60 °C for 2 h with 5 g of its corresponding curing agent. Then, a negative MN mold was patterned on the cured PDMS by a UV laser ablation system (SFX-5UV, Luoyang Xincheng Precision Machinery). The MN molds were fabricated with different depths from 0.5 to 3.5 mm, a base diameter of around 0.25 mm, and an inter-needle spacing of 1 mm. The depth of the MN mold can be controlled by tuning the loops and power of UV laser ablation. The UV ablation was followed by acetone sonication for at least 5 min to clean up the surface of the PDMS negative mold. Then, a PLGA (Mw = 50–75 kDa, ester terminated, Sigma Aldrich) solution (10 wt% in acetone, VWR) was drop cast on the PDMS negative mold in the petri dish. The PLGA-covered mold was heated at 45 °C and 60–160 mmHg for around 2 min to let the PLGA solution fill in the mold and evaporate. The entire PDMS mold was capped by another petri dish to slow down the evaporation of acetone. The evaporation process was followed by a refill of PLGA solution. The evaporation-refilling cycle was conducted 10–20 times to provide enough PLGA for the MN patch, with a thickness from 0.6 to 1.2 mm. After that, the PLGA-covered mold was kept in the oven at 45 °C and 1 atm for at least 8 h to dry the surface. The mold was then frozen at −20 °C for at least 30 min to harden the PLGA patch, which was subsequently extracted from the mold. The free-standing PLGA patch was allowed to further dry on both sides at 45 °C and 1 atm for another 24 h, then trimmed by UV laser ablation. The hardened and dry PLGA patch was eventually deposited with a layer of gold (usually 150 nm in thickness) by sputter coating (PVD 75 sputterer, Kurt J. Lesker). The gold traces were patterned by an IR laser ablation system (SFX-50GS, Luoyang Xincheng Precision Machinery).
Melatonin-loaded MN
A melatonin-loaded PLGA solution was prepared in advance. Melatonin (Sigma Aldrich) was mixed with PLGA (Sigma Aldrich) at a weight ratio from 1:10 to 1:2. Then, the mixture was dissolved in acetone (VWR) at a 1:10 ratio by weight. The solution was stored in a refrigerator at around 4 °C for no longer than 12 h and wrapped with aluminum foil to avoid light exposure.
First, 50 g of PDMS (Sylgard 184, Dow Corning) was fully cured in a glass petri dish at 60 °C for 2 h with 5 g of its corresponding curing agent. Then, a negative MN mold was patterned on the cured PDMS by a UV laser ablation system (SFX-5UV, Luoyang Xincheng Precision Machinery). The MN molds were fabricated with a depth of around 3 mm, a base diameter of around 0.35–0.5 mm, and an inter-needle spacing of at least 5 mm. The depth of the MN mold can be controlled by tuning the loops and power of UV laser ablation. The UV ablation was followed by acetone sonication for at least 5 min to clean the surface of the PDMS negative mold. Then, the melatonin-PLGA solution (10 wt% in acetone) was drop cast on the PDMS negative mold in the petri dish. The PLGA-covered mold was heated at 30 °C and 60–160 mmHg for around 2 min to let the PLGA solution fill the mold and the solvent evaporate. The entire PDMS mold was capped by another petri dish to slow down the evaporation of acetone. The evaporation process was followed by a refill of PLGA solution. The evaporation-refilling cycle was conducted 10–20 times to provide enough PLGA for the MN patch, with a thickness from 0.6 to 1.2 mm. After that, the PLGA-covered mold was kept in the oven at 30 °C and 1 atm for at least 8 h to dry the surface. The mold was then frozen at −20 °C for at least 30 min to harden the PLGA patch, which was subsequently extracted from the mold. The free-standing PLGA patch was allowed to dry further on both sides at 30 °C and 1 atm for another 24–48 h, then trimmed by UV laser ablation. The hardened and dry PLGA patch was eventually deposited with a layer of gold (usually 150–200 nm in thickness) by sputter coating (PVD 75 sputterer, Kurt J. Lesker). The gold traces were patterned by an IR laser ablation system (SFX-50GS, Luoyang Xincheng Precision Machinery).
Rhodamine B-loaded MN
A Rhodamine B (Thermo Scientific) loaded PLGA solution was prepared in advance. Rhodamine B was dissolved in the acetone (VWR) solution of PLGA (usually at a 1:300 ratio by weight; Sigma Aldrich).
First, 50 g of PDMS (Sylgard 184, Dow Corning) was fully cured in a glass petri dish at 60 °C for 2 h with 5 g of its corresponding curing agent. Then, a negative MN mold was patterned on the cured PDMS by a UV laser ablation system (SFX-5UV, Luoyang Xincheng Precision Machinery). The MN molds were fabricated with different depths from 0.5 to 3.5 mm, a base diameter of around 0.25 mm, and an inter-needle spacing of 1 mm. The depth of the MN mold can be controlled by tuning the loops and power of UV laser ablation. The UV ablation was followed by acetone sonication for at least 5 min to clean the surface of the PDMS negative mold. Then, the Rhodamine B-PLGA solution (10 wt% in acetone) was drop cast on the PDMS negative mold in the petri dish. The PLGA-covered mold was heated at 45 °C and 60–160 mmHg for around 2 min to let the PLGA solution fill the mold and the solvent evaporate. The entire PDMS mold was capped by another petri dish to slow down the evaporation of acetone. The evaporation process was followed by a refill of PLGA solution. The evaporation-refilling cycle was conducted 10–20 times to provide enough PLGA for the MN patch, with a thickness from 0.6 to 1.2 mm. After that, the PLGA-covered mold was kept in the oven at 45 °C and 1 atm for at least 8 h to dry the surface. The mold was then frozen at −20 °C for at least 30 min to harden the PLGA patch, which was subsequently extracted from the mold. The free-standing PLGA patch was allowed to dry further on both sides at 45 °C and 1 atm for another 24 h, then trimmed by UV laser ablation. The hardened and dry PLGA patch was eventually deposited with a layer of gold (usually 150 nm in thickness) by sputter coating (PVD 75 sputterer, Kurt J. Lesker). The gold traces were patterned by an IR laser ablation system (SFX-50GS, Luoyang Xincheng Precision Machinery). For the controlled dye-release model, a PDMS layer (1:10, ~15 μm) was carefully coated by hand and cured on the specific positions to be covered.
Wireless on-demand patch
A commercially available Pyralux Kapton soft PCB material, made of polyimide sandwiched between copper layers, was spray-coated with masking paint (Krylon). The paint mask was then partially removed using an infrared laser cutter to expose the unwanted copper, which was removed by etching in ferric chloride solution (MG Chemicals 415) for 15 min. The soft PCB was then rinsed with water and acetone to remove the remaining etchant and paint mask. Surface-mount electrical components, including diodes and power regulators, were soldered using solder paste and hot air guns. An MN array (1.2 mm, 150-nm gold coated) was integrated into the wireless patch using PI-based adhesives. A small jumper wire was attached to the MN array using silver-based conductive adhesives to ensure conductivity. Another MN array was integrated into the wireless patch in the same way to serve as the counter electrode for crevice corrosion.
Kinetic characterization of electrochemical corrosion
An MN patch coated with gold was connected to a piece of graphene tape (5113SFT, 3M). The peripheral area of the MN patch (except for the needles) and the graphene tape were protected from corrosion by PDMS, leaving an opening area of 0.5 × 0.5 cm². An ammeter (NI-USB 4065, National Instruments) was used to monitor the current density versus time. The experiment was carried out in a two-electrode system in which the counter electrode was graphene tape. A power source (SPD3303X-E, Siglent) provided a constant voltage between the cathode and anode, with the anode connected to the gold-coated MN patch to provide the oxidative potential. The MN patch with graphene tape was coated with PDMS (~10 μm in thickness) to protect the exposed surface except for the MN regions. The electrochemical corrosion was carried out in a standard environment (1X DPBS, Cornings) to mimic body fluid. I-t curves were obtained at 2.2, 2.4, 2.6, 2.8, and 3.0 V.
Kinetic characterizations were also carried out for Mo-coated MN arrays in the same way (Supplementary Fig. 12 ).
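The endpoint of corrosion can be read off such an I-t curve as the moment the current density collapses after the gold layer is consumed. The following Python sketch extracts the effective corrosion time this way; the synthetic trace, function name, and 10% threshold are illustrative assumptions, not values from the actual analysis:

```python
import numpy as np

def effective_corrosion_time(t, i, frac=0.1):
    """Return the time at which the current density first falls below
    `frac` of its initial plateau value (endpoint of crevice corrosion).
    t (s) and i (A/cm^2) are equally spaced samples of one I-t trace."""
    plateau = np.median(i[: max(1, len(i) // 10)])  # early-time plateau estimate
    below = np.nonzero(i < frac * plateau)[0]
    return t[below[0]] if below.size else None

# synthetic trace: a plateau followed by a steep drop at ~15 s
# (mimicking the behavior described for the 2.8-V bias)
t = np.linspace(0, 30, 301)
i = np.where(t < 15, 2.0, 0.05) + 0.01 * np.random.default_rng(0).standard_normal(t.size)
print(effective_corrosion_time(t, i))  # ~15 s
```

The same extraction applied to each bias voltage yields the corrosion-time-versus-potential data plotted in Fig. 2f.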
Study of dye release
Free release without encapsulation
Rhodamine B (Thermo Scientific, 1:300 mass ratio versus PLGA) was mixed with PLGA (Sigma Aldrich) and dissolved in acetone (VWR). The fabricated MN patch was immersed in a petri dish containing 20 mL of deionized water (HAVENLAB) held at a constant 45 °C. Samples were taken for UV-Vis spectrometry from the moment the MN patch was immersed: every 1 min from 0 to 10 min, every 5 min from 10 to 30 min, and at 60 min. The UV-Vis absorbance was characterized by a UV-Vis spectrophotometer (VWR-10037, VWR) from 800 to 300 nm, with an interval of 1 nm. The samples were returned to the petri dish immediately after characterization to maintain a constant volume. The absorption data was obtained by VWR UV software and visualized by Origin Pro 2022.
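The absorbance readings can be converted to released-dye concentration via the Beer-Lambert law, A = ε·l·c. A minimal Python sketch follows; the molar absorptivity is a literature-range value for Rhodamine B near its ~554-nm peak, used here only as an illustrative assumption rather than a calibration from this study:

```python
def rhodamine_concentration(absorbance, epsilon=1.06e5, path_cm=1.0):
    """Beer-Lambert estimate of dye concentration (mol/L) from the
    absorbance at the Rhodamine B absorption peak (~554 nm).
    epsilon: molar absorptivity in L mol^-1 cm^-1 (assumed value)."""
    return absorbance / (epsilon * path_cm)

# convert a series of peak-absorbance readings into a release curve (uM)
readings = [0.05, 0.12, 0.21, 0.30]  # illustrative A(554 nm) values over time
conc_uM = [rhodamine_concentration(a) * 1e6 for a in readings]
```

A monotonically increasing concentration series over the sampling schedule then traces the cumulative dye release.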
Release with encapsulation
A Rhodamine B-loaded MN patch was deposited with a 100-nm gold layer on the side with needles to encapsulate the PLGA and dye. The back side of the MN patch was fixed and encapsulated into PDMS to prevent exposure to water. The patch was immersed in a petri dish with 20 mL of 1X DPBS (Cornings) at 45 °C. The UV-Vis absorbance was characterized by a UV-Vis spectrophotometer (VWR-10037, VWR) from 800 to 300 nm, with an interval of 1 nm. The samples were returned to the petri dish immediately after characterization to maintain a constant volume. The absorption data was obtained by VWR UV software and visualized by Origin Pro 2022.
On-demand stepwise release
A 150-nm gold layer was deposited onto the needle side of a Rhodamine B-loaded MN patch by sputter coating and then patterned by IR laser ablation to generate gold traces. The gold electrodes were connected by silver paste (8331D, MG Chemicals) to the constant-voltage power source. Then, the entire PLGA patch was encapsulated with PDMS except for the needle region. The patch was immersed in a petri dish with 20 mL of DI water at 60 °C. A 2.5-V constant voltage was applied to trigger the electrochemical corrosion of gold within 20 s. The dye release of the four MN arrays was subsequently triggered every 30 min. The UV-Vis absorbance was characterized by a UV-Vis spectrophotometer (VWR-10037, VWR) from 800 to 300 nm, with an interval of 1 nm. Samples were taken at set intervals for UV-Vis absorbance characterization and returned to keep the volume constant. The absorption data was obtained by VWR UV software and visualized by Origin Pro 2022.
Surface profilometry of corrosion
A silicon wafer ({100} facet, UniversityWafer) was deposited with a 150-nm gold layer by sputtering and divided into 4-cm² squares. The gold-coated side of the wafer was connected to the DC power source by graphene tape. The peripheral area of the wafer square was encapsulated by PDMS with a 1-cm² window exposed in the center, as illustrated in Supplementary Fig. 7. The wafer square was immersed in 10 mL of 1X DPBS (Cornings), and the electrochemical corrosion was triggered by a 2.5-V constant voltage. Optical images of the wafer square were captured by a microscope (S9i, Leica) from the onset of corrosion, with a time interval of 0.5 min from 0 to 5 min and 1 min from 5 to 9 min. The obtained images were cropped to leave only the exposed window in the center and further analyzed by FIJI ImageJ (Java 1.8.0_172). Arithmetic mean roughness (Ra) and root-mean-square roughness (Rq) were calculated for each image by the roughness analysis module.
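The Ra and Rq quantities reported by the roughness analysis module are simple statistics of the deviation from the mean plane. A minimal Python sketch of the same computation on a 2-D height (or grayscale-intensity) map, with synthetic data in place of the actual micrographs:

```python
import numpy as np

def surface_roughness(height_map):
    """Arithmetic-mean (Ra) and root-mean-square (Rq) roughness of a 2-D
    height or intensity map, measured relative to its mean plane; these
    are the quantities ImageJ's roughness analysis reports."""
    z = np.asarray(height_map, dtype=float)
    dev = z - z.mean()
    ra = np.abs(dev).mean()
    rq = np.sqrt((dev ** 2).mean())
    return ra, rq

# a flat surface has zero roughness; added texture raises both Ra and Rq
flat = np.full((64, 64), 10.0)
rough = flat + np.random.default_rng(1).normal(0, 2.0, flat.shape)
print(surface_roughness(flat))   # (0.0, 0.0)
print(surface_roughness(rough))
```

Tracking these two numbers over the image series reproduces the roughness-versus-time trend described for the corroding gold window.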
Study of thermal effect
A 150-nm gold-coated MN array was placed in a petri dish and immersed in 10 mL of 1X DPBS (Cornings) at room temperature (25 °C). The MN array was soldered to a copper wire and connected to the DC power source. The infrared radiation image was recorded by a thermal camera (ETS320, FLIR). For the first 1 min, the temperature of the MN array was recorded without any voltage applied. For the following 1 min, a 2.5-V constant voltage was applied to the MN array. Temperature data points were taken every 5 s from the videos.
Elemental analysis of corrosion
The electrochemical corrosion of MN arrays (1.2 mm in length, 150-nm gold coated) was conducted at 2.4 V in 1X DPBS (Cornings). The triggering time was 0.5 min and 2 min for two MN arrays, representing the “transitioning” and “release” stages of the corrosion procedure. Optical and SEM images were captured, as shown in Fig. 2a .
The EDXS element mapping was conducted with a scanning electron microscope (SEM, Hitachi S-4700). A 20-kV accelerating voltage was applied under analysis mode over a selected 100 × 120 μm² area, and the signal was collected for 400 s. For MN samples at the releasing stage, which have little gold coating left, a 5-nm Pd layer was sputter coated before characterization to increase the surface conductivity. The data analysis was done automatically by the INCA software (Oxford Instruments).
Mechanical strength measurement
A force measurement system (Mark 10, ESM 303) was used to study the mechanical strength of MNs. An MN array (9- or 25-needle, 1.2 mm, with or without a 150-nm gold coating) was attached to the top sample holder by glue and double-sided tape (Supplementary Fig. 15g). The bottom sample holder held a piece of glass to serve as a hard object. Both the glass and the MN array were placed as horizontally as possible. The sampling rate of the force gauge was set as high as possible, and the moving speed of the sample holder was set to 13 mm/min, the lowest available value. The sample holder gradually descended until the MNs came into contact with the glass and was stopped manually when the MN array was fully crushed (Supplementary Fig. 15a, c). Another soft-contact experiment was conducted under the same parameters except that the glass was replaced by 0.5% agar to mimic the brain (Supplementary Fig. 15e–f).
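From the recorded force-displacement trace, a per-needle failure force can be estimated as the peak force divided by the number of needles, assuming near-simultaneous loading of all needles. This Python sketch (with a synthetic trace; the numbers are illustrative, not measured values) shows the computation:

```python
import numpy as np

def per_needle_failure_force(force_trace, n_needles):
    """Estimate the mechanical failure force per needle as the peak of
    the recorded force trace divided by the needle count, assuming all
    needles in the array are loaded nearly simultaneously."""
    return float(np.max(force_trace)) / n_needles

# synthetic compression trace: ramp up to a 45 N peak, then collapse
trace = np.concatenate([np.linspace(0, 45, 100), np.linspace(45, 20, 30)])
print(per_needle_failure_force(trace, n_needles=9))  # 5.0 N per needle
```

The simultaneous-loading assumption is why the horizontal alignment of the array and the glass matters in the protocol above.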
Wireless power transfer analysis
A signal generator (SDG2042, Siglent) was used to provide a radio-frequency (RF) signal to the inductive coil. Performance at several frequencies (30–40 MHz) was examined to find the optimal transmission efficiency. The high-frequency alternating current was then fed into the inductive coil, closely coupled with the receiving coil, which was connected to a rectifier and a power regulator to provide a DC voltage for MNs (1.2 mm, 150-nm gold coated) immersed in saline solution. The voltage was measured using a benchtop oscilloscope (SDS1204, Siglent). The impedance of the MNs was measured using a portable network analyzer (NanoVNA). The power dissipated by the MNs during the electrochemical process, P, can be calculated from the voltage measurement (peak-to-peak amplitude, Vpp) and the real part of the impedance, Re(Z), at the measurement frequency using the following equation: P = Vpp² / (8 × Re(Z)).
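Under the stated definitions, the dissipated power for a sinusoidal drive follows from P = Vpp²/(8·Re(Z)), i.e., the RMS voltage squared over the resistive part of the load. A minimal Python sketch (the numerical inputs are illustrative, not measured values):

```python
def dissipated_power(v_pp, re_z):
    """Power (W) dissipated in the resistive part of the MN load for a
    sinusoidal drive with peak-to-peak amplitude v_pp (V) across an
    impedance with real part re_z (ohm): P = v_pp**2 / (8 * re_z),
    since V_rms = v_pp / (2 * sqrt(2))."""
    return v_pp ** 2 / (8.0 * re_z)

# e.g. a 4-V peak-to-peak signal across a load with a 100-ohm real part
print(dissipated_power(4.0, 100.0))  # 0.02 W
```

The factor of 8 combines the peak-to-peak-to-amplitude conversion (÷2) with the RMS conversion (÷√2), both squared.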
Finite element analysis of crevice corrosion
The finite element analysis (FEA) of gold-layer crevice corrosion on MNs was simulated in COMSOL 6.1. A model MN pair (1.2 mm in length, 3.5 mm apart) was set up to simulate the exposed surfaces of the cathode and anode in the fluid. The tip of each MN was treated as a hemisphere (100 μm in diameter) to facilitate convergence, while the bottom face was set as a circle (270 μm in diameter). The surface of one MN was set as the cathode boundary and the other as the anode boundary, as shown in Supplementary Fig. 6b–d. The entire simulation domain was set as a cubic space (5 mm side length) containing the two MNs. To simulate the physical and chemical properties of 1X DPBS, water (Water, liquid (mat1)) from the built-in database was chosen as the material of the cubic domain (excluding the MNs). The electroconductivity was set to 1.6 S/m, a typical value provided by the manufacturer of PBS. The other surfaces of the model (except for the MN surfaces) were set as insulating in the boundary conditions. The stationary and transient simulations were performed based on secondary current distribution, which considers overpotentials while assuming a homogeneous electrolyte. The anode equilibrium potential, according to the literature 85, was set to 1.83 V (Au+|Au) to describe the electrochemical force needed to trigger gold oxidation. The cathode equilibrium potential was set to 0 V, and the electromotive force, i.e., the voltage of the external power source, was set to 2.5 V unless specified. The model was automatically meshed at the ultrafine level under physics-controlled mode. The current density distribution (Supplementary Fig. 6a) and iso-potential surfaces (Supplementary Fig. 6b–d) were simulated at the starting point of the crevice corrosion, which is stationary. The anodic polarization curve under different electromotive forces (2.0–3.2 V) was simulated, as shown in Supplementary Fig. 6e.
Transient simulation is conducted for the corrosion depth versus time, assuming the gold layer on the MN is thick enough. The corrosion depths from the starting time (0 s) to 60 s are monitored every 10 s (Supplementary Fig. 6f ) and visualized (Supplementary Fig. 6g ).
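The simulated recession rate can be cross-checked against Faraday's law, which links anodic current density to the rate at which the gold surface recedes. The sketch below assumes three-electron oxidation (Au to Au(III)); the ~133 A/m² input is an illustrative value chosen to show the order of magnitude consistent with the simulated rate, not a number reported in the study:

```python
# Faraday's-law conversion between anodic current density and gold
# recession rate, assuming three-electron oxidation (Au -> Au(III)).
F = 96485.0        # Faraday constant, C/mol
M_AU = 196.97e-3   # molar mass of gold, kg/mol
RHO_AU = 19300.0   # density of gold, kg/m^3
N_E = 3            # electrons transferred per gold atom oxidized

def recession_rate_nm_per_min(i_A_per_m2):
    """Gold surface recession rate (nm/min) for a given anodic current
    density (A/m^2), via Faraday's law: rate = i * M / (n * F * rho)."""
    rate_m_per_s = i_A_per_m2 * M_AU / (N_E * F * RHO_AU)
    return rate_m_per_s * 1e9 * 60

# an anodic current density of ~133 A/m^2 corresponds to roughly the
# simulated ~281 nm/min recession rate at 2.5 V
print(recession_rate_nm_per_min(133.0))
```

The same relation, run in reverse, converts an observed corrosion time for a 150-nm film into an effective average current density.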
Immunohistochemical analysis
Mice were given a lethal dose of pentobarbital sodium (Sigma Aldrich), followed by intracardiac perfusion with 4% paraformaldehyde (Sigma Aldrich) in PBS, as reported previously 86–88. Then, the brains were dissected, post-fixed for 24 h at 4 °C, and cryoprotected, fully submerged, in a solution of 30% sucrose (Fisher Scientific) in 0.1 M phosphate buffer (pH 7.4) (Sigma Aldrich) at 4 °C for at least 24 h. This was followed by cutting into 40-μm sections, washing three times in PBS, three 5-min incubations in 1 mg/ml sodium borohydride (Honeywell Fluka) in PBS, and then a 1-h incubation in 1% Triton X-100 (Sigma Aldrich) in PBS. A blocking step was then performed using 5% donkey serum (Sigma Aldrich) in 0.3% PBST (phosphate-buffered saline with Triton X-100) for 1 h. Brain sections were then incubated for ∼16 h at 4 °C in a blocking buffer containing goat anti-GFAP from Santa Cruz Biotechnology (1:1000, Cat# sc-6170) and rabbit anti-Iba1 from Fujifilm Wako (1:500, Cat# 019-19741). Sections were then transferred to a secondary antibody solution containing Fisher Scientific Alexa Fluor 647 donkey anti-rabbit IgG (1:1000, Cat# A32795), Alexa Fluor 568 donkey anti-goat IgG (1:1000, Cat# A11057), and Neurotrace 435/455 Blue Fluorescent Nissl stain (1:100, Cat# N21479) in 0.1% PBST for 1 h at 24 °C, with intermittent brief periods of shaking. Sections were washed three times for 30 min each in 0.1% PBST, with 1 μM DAPI (Invitrogen) solution included in the third wash step. After rinsing, slices were dried on a glass slide and coverslipped. All brain slices were imaged with an Olympus FV3000 microscope, and images were obtained by FV31s-VW software. All images were processed with the same settings using the Fiji software by ImageJ (Java 1.8.0_172).
Stereotaxic surgery
Mice were anesthetized under 1.5–2% isoflurane in oxygen (Baxter Healthcare) at 0.8 LPM flow rate. Probes covering melatonin or vehicle were implanted unilaterally into the medial prefrontal cortex (anteroposterior (AP): −1.5 mm, mediolateral (ML): ±0.3 mm, dorsoventral (DV): −2.1 mm). Then, mice were chronically implanted with EEG/EMG electrodes for polysomnographic recordings. The electrodes consisted of two stainless steel screws connected to EEG Teflon-coated wires, which were inserted through the skull, and two EMG Teflon-coated wires that were bilaterally placed into both trapezius muscles. All of the electrodes were fixed to the skull with dental cement and attached to a microconnector 89 . The scalp wound was closed with surgical sutures, and each mouse was kept in a warm environment until it resumed normal activity as previously described 90 , 91 .
Polygraphic recordings and vigilance state analysis
All mice were recorded with an EEG/EMG polysomnographic system (Pinnacle). The EEG and EMG recordings were performed by means of a specially designed slip ring so that the behavioral movement of the mice would not be restricted. First, cortical EEG and EMG signals were amplified and filtered (EEG, 0.5–30 Hz; EMG, 20–200 Hz), then digitized at a sampling rate of 200 Hz and recorded using Sirenia Acquisition (Pinnacle). Upon completion, polygraphic recordings were automatically scored off-line in 4-s epochs as wakefulness, REM, and NREM sleep by SLEEPSIGN (Kissei Comtec, Nagano, Japan) according to standard criteria 92. Finally, the defined sleep-wake stages were examined visually and corrected if necessary. The EEG/EMG data were analyzed with GraphPad Prism 8 and Matlab R2022b.
Gold layer thickness analysis
To analyze the uniformity of the gold layer, we prepared two MN samples: (1) a 1.5-mm MN with a 150-nm gold layer, broken in half after freezing (Supplementary Fig. 10a–c); (2) a 1.2-mm MN with a 100-nm gold layer that underwent partial Au film exfoliation (Supplementary Fig. 10d–f). The SEM cross-section images of the MN reveal the detailed structure of the outer gold layer. Supplementary Fig. 10c shows the cross-section of the gold layer (colored orange), estimated at approximately 160 nm in thickness. Considering the existence of a diffusion layer (where gold atoms mix with polymer), the error in thickness is acceptable. Similarly, Supplementary Fig. 10f shows the thickness of the exfoliated gold film, roughly calculated to be ~95 nm after perspective correction. Both samples show good uniformity in the thickness of the gold layer, which provides good encapsulation performance. Furthermore, we used AFM to characterize the surface roughness of the coated gold layer on the polymer substrate, as presented in Supplementary Fig. 10. The average and root-mean-square (RMS) roughness are calculated to be less than 3 nm, which meets the requirement of thickness uniformity.
Immunohistochemical analysis (additional)
Brain sections were then incubated for ∼16 h at 4 °C in a blocking buffer containing goat anti-GFAP from Santa Cruz Biotechnology (1:1000, Cat# sc-6170), rabbit anti-Iba1 from Fujifilm Wako (1:500, Cat# 019-19741), and chicken anti-Neuron-Specific Enolase (NSE, 1:100, Millipore Cat# AB9698).
All data values are presented as the mean ± standard deviation (SD). For analyzing fluorescence density and mean peak value in the region of interest (ROI), we define the ROI as the area within 200 μm lateral to the edge of the probes. One-way ANOVA was used to compare the fluorescence density and mean peak value between different experimental groups.
GFAP and IBA-1, which mark astrocytes and microglia, respectively, reflect inflammation in brain tissue 93, 94. Neuron-specific enolase (NSE), an acidic enzyme unique to neurons and neuroendocrine cells and a sensitive indicator for assessing the severity of nerve cell damage and prognosis, is widely used to reflect neural injuries and pathological processes 95.
Stereotaxic surgery (additional)
For in vivo photometry recordings, mice were unilaterally injected with 250 nl of AAV5-CaMKII-GCaMP6f mixed with 5 nl of AAV5-CaMKII-tdTomato (from UNC Vector Core) into the mPFC at the following coordinates: AP: −1.5 mm, ML: +0.3 mm, DV: −2.1 mm. Optical fibers (Newdoon Inc, O.D.: 1.25 mm, core: 200 μm, NA: 0.37) were implanted 0.2 mm above the mPFC (AP: −1.5 mm, ML: +1.4 mm, DV: −1.9 mm). At the same time, MN probes were implanted ipsilaterally into the medial prefrontal cortex at 30 degrees lateral to the optical fiber.
Fiber photometry system
The multi-fiber photometry system was used as previously described 96, 97. Briefly, the system consisted of a 488-nm excitation laser, a fluorescence cube, and a spectrometer. The 488-nm laser beam was first launched into the fluorescence cube and then into the optical fibers. The GCaMP and tdTomato emission fluorescence collected from the fiber probe traveled back to the spectrometer. Only animals with strong GCaMP and tdTomato expression were included in the study (Supplementary Fig. 22a). Spectral data were acquired by OceanView software (Ocean Optics, Inc) at 10 Hz and synchronized to a 20-Hz video recording system to capture the animal behavior.
The in vivo recordings were carried out in an open-top home cage (21.6 × 17.8 × 12.7 cm) in a 30-lux red-light environment. Laser power was adjusted to a final optical fiber output of 30 mW. Photometry data were exported to MATLAB R2014b for analysis. Coefficients of GCaMP6f and tdTomato were unmixed by a customized script by fitting spectrum signals to standard emission curves. GCaMP6f signals were normalized by tdTomato signals for motion correction (Supplementary Fig. 22a). A 0.1-Hz high-pass filter corrected the fluorescence bleaching. Photometry signals (ΔF/F) were derived by calculating (F − F0)/F0, where F0 is the median of the fluorescence signal (Supplementary Fig. 22b). For the home cage analysis, we recorded data for 5 min per mouse and calculated the ΔF/F to further analyze the correlation of SuM and DG signals in the raw data (Supplementary Fig. 22c). Cumulative activity = ΔF/F × time. The average of the ΔF/F peaks above 2 SD during the 5 min in the home cage was calculated.
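The normalization and ΔF/F steps described above can be condensed into a few lines. The Python sketch below assumes the motion correction is a simple element-wise division of the GCaMP6f trace by the tdTomato trace before computing ΔF/F with a median baseline; the function name and toy traces are illustrative:

```python
import numpy as np

def delta_f_over_f(gcamp, tdtomato):
    """Motion-corrected photometry signal: normalize GCaMP6f by tdTomato
    (assumed here as element-wise division), then compute
    dF/F = (F - F0) / F0 with F0 the median of the corrected trace."""
    f = np.asarray(gcamp, dtype=float) / np.asarray(tdtomato, dtype=float)
    f0 = np.median(f)
    return (f - f0) / f0

# a single transient on a flat baseline yields one positive dF/F excursion
g = np.array([1.0, 1.0, 1.5, 1.0, 1.0])
t = np.ones(5)
print(delta_f_over_f(g, t))  # [0. 0. 0.5 0. 0.]
```

Events can then be counted as excursions exceeding 2 SD of the resulting trace, matching the peak criterion above.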
PLGA MN accelerated degradation
We conducted an accelerated degradation experiment on a PLGA MN (3 mm, without drug payload) by soaking it in 65 °C 1X DPBS (Cornings). We observed changes in shape and morphology during the immersion (Supplementary Fig. 23). The MN was taken out of the solution for observation after 30 min, 1 h, 6 h, and 24 h. After 24 h of degradation, the main body of the PLGA MN had collapsed. This experiment simulates the chronic degradation of PLGA in a biofluidic environment under ambient conditions, showing significant degradation of the PLGA MN.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. | Results and discussion
Configuration and characterization of the spatiotemporal on-demand patch (SOP)
Figure 1a highlights the design of an SOP, which consists of two primary parts: i) a drug-loaded microneedle (MN) patch protected by an electrochemically triggerable metal (gold) layer as the drug delivery interface; ii) a near-field communication (NFC) module for the wireless control of triggering signals that set the location and schedule of drug release. A flexible printed circuit board (PCB) provides interconnect traces that integrate the two parts into a fully wireless, wearable drug delivery system. The main body of the drug delivery interface (the MN arrays) uses poly(D,L-lactide-co-glycolide) (PLGA, Mw = 50 ~ 75 kDa, ester terminated, Sigma Aldrich) as the matrix material, which undergoes bulk erosion upon contact with biofluids to generate biologically benign byproducts (lactic acid and glycolic acid) 53. As shown in Fig. 1b, the fabrication relies on sputter deposition to coat a thin layer of gold (150 nm in thickness) onto the drug-loaded MN arrays supported by a PLGA base. Because the gold encapsulation layer is supported on the surface of solid MNs, it provides sufficient stability, and effective drug encapsulation, at a smaller thickness (150 nm) than the gold layers (more than 300 nm thick) used in previously reported reservoir designs (as illustrated in Supplementary Fig. 1). Laser ablation defines the gold traces that connect separated MN domains to realize spatial control of drug release. A thin polydimethylsiloxane (PDMS) layer (10 μm in thickness) covers the gold at the circuitry base regions, exposing only the MN regions to allow physical contact of the gold layer on the MNs with biofluids.
Fabrication of the SOP relies on a low-cost solution-molding procedure (Supplementary Fig. 2). The process starts with UV laser ablation to define an MN mold from a PDMS pad. Drop casting drug-loaded PLGA solution into the PDMS mold and setting it under vacuum allow the PLGA to enter the negative MN molds. Sufficient solidification over 8 h allows easy extraction of the PLGA MNs. Then, the PLGA MNs undergo sputter deposition to coat a 150-nm-thick gold layer that conformally covers the top surface, followed by patterning with a laser ablation system to define control circuits. Drop casting a thin PDMS layer (10 μm in thickness) onto the gold layers protects the control circuits and exposes the MNs for drug release. The MN patch is then attached to a flexible PCB and connected with a current regulator, a wireless energy harvester, and a microcontroller to complete the fabrication process. The solution-molding method can produce SOPs with: (1) tunable MN lengths from 600 μm to 3 mm and aspect ratios from 3 to 8; (2) high dimensional uniformity in both length and base diameter (Supplementary Fig. 3); (3) arbitrary MN array configurations (square, hexagonal, single-needle, etc.). Figure 1e–h shows that the morphology and shape of the PLGA MNs remain stable during both gold deposition and drug loading procedures.
Figure 1c, d illustrates the overall working mechanism by which the SOP realizes high-precision drug delivery. The electrically triggered crevice corrosion of the gold protective layer acts as the switch for initiating drug release from specific collections of SOP MNs. Upon SOP deployment on the skin, the MNs stay in a standby stage with the drug fully protected by the gold layer. Once a direct-current (DC) electrical trigger (2.2–3 V) is applied, electrochemical crevice corrosion starts to occur on the triggered MNs, transitioning them from standby mode to releasing mode. After a short period (<30 s when 2.5 V is applied), the gold protective layer on the MN is fully dissolved, and the MNs are exposed to the bio-environment to enable drug release. The compatible integration of microcontrollers allows the electrical triggers to be delivered at precise time points for precision delivery. By patterning the gold layer via microfabrication, the SOP can realize spatial profiles of drug release at high spatial resolution (~1 mm²). A built-in anode is integrated into the SOP to complete the circuitry for the in vivo electrochemical crevice corrosion.
Characterization of electrically triggered crevice corrosion
Figure 2 presents the characterization of the SOP's active control of drug release (based on MNs 1.2 mm in height with a 150-nm gold coating). The operation of drug-release control includes three stages: (1) the standby stage, when the MNs are fully coated with gold (labeled as 0 min); (2) the transitioning stage, when the electrical trigger is activated and the gold layer is partially dissolved (labeled as 0.5 min); (3) the releasing stage, when the gold is fully dissolved and the MNs are exposed to the biofluids (labeled as 2 min). 1X Dulbecco's phosphate-buffered saline (DPBS, Corning) is used here to simulate the body biofluid. The experiment is conducted at room temperature and triggered by a 2.5-V DC voltage. Figure 2a shows optical and SEM images of the MN arrays at different stages, which indicate a noticeable change in surface color and roughness associated with the electrically triggered crevice corrosion. The results demonstrate that the main structure of the MN stays stable during the transitioning stage and that the electrical triggers effectively dissolve gold into biofluids and sufficiently expose the core MNs. More characterizations from different perspectives appear in Supplementary Fig. 4.
The electrochemical crevice corrosion of the gold layer can be triggered by a direct-current potential within 2 min, which is of clinical relevance for a timely response in drug administration. A quantitative amperometry study of current density under different potentials is conducted to understand the electrical triggering behavior. Figure 2e shows amperometric (I-t) measurements of SOP triggering with potential biases ranging from 2.0 V to 2.8 V applied to the MN arrays (1.2 mm in length, 150-nm gold coated). A steep drop in current density appears at 15 s of application for the 2.8-V bias, indicating the endpoint of the electrochemical crevice corrosion. The time at which the current drop appears increases as the potential bias decreases. The elapsed time from the beginning to the end of the current flow is defined as the effective corrosion time, which is plotted against the potential in Fig. 2f. The fitting is based on the Butler-Volmer equation, i = i0·[exp(α·n·F·η/(R·T)) − exp(−(1 − α)·n·F·η/(R·T))], where i0 is the exchange current density, F is the Faraday constant, η is the overpotential, R is the ideal gas constant, T is the thermodynamic temperature, α is a coefficient with values ranging from 0 to 1, and n is the number of electrons in the anodic half-reaction.
This equation describes the relationship between the potential difference and the reaction rate 54. At an ambient potential of 2.5 V, the crevice corrosion on the MN is triggered within 30 s; this parameter is applied in the following experiments. We hypothesize that the crevice corrosion driven by a constant voltage includes two parts: anodic oxidation and mechanical crevices. The 2.5-V potential between the anode and cathode is sufficient to trigger gold oxidation coupled with the hydrogen evolution reaction (HER) at ambient conditions (1X DPBS, 25 °C). It has been reported that an anodic potential larger than 1.95 V (vs the standard hydrogen electrode, SHE) is high enough to drive the below-surface oxidation of a gold film 55. This agrees with our observations, as the crevice corrosion is triggered at potentials above 2.0 V, even though the HER on the counter electrode is not under the standard condition. The oxidized gold species in neutral and alkaline environments may include oxides, hydroxides, and free elements, mostly in the form of nanoparticles (NPs) 56, 57, which in the amounts used here are collectively benign to the human body 58. Supplementary Fig. 5 provides a schematic illustration of the two-electrode system of the SOP (Supplementary Fig. 5a) and an atomic-scale illustration of the anodic oxidation (Supplementary Fig. 5b, c) on the Au (111) facet in a neutral environment, with the reaction product Au₂O₃ in the form of nanoparticles. The anodic behavior is also studied by finite element analysis (FEA), including the reaction potential, reaction rate, current, and potential distribution (Supplementary Fig. 6a–d). The simulation shows a reaction rate at 2.5 V of 281 nm/min (Fig. 2h, i), which is consistent with the experimental observations (150 nm in ~30 s). The anodic polarization curve under the condition of standard potentials and primary current distribution also shows a minimal triggering potential at around 2.0 V (Supplementary Fig. 6e).
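Because the anodic branch of the Butler-Volmer equation dominates at these overpotentials, the effective corrosion time should decay roughly exponentially with the applied potential. A minimal Python sketch of such a fit via log-linear least squares; the (V, t) data points are illustrative stand-ins for the trend in Fig. 2f, not the measured values:

```python
import numpy as np

# Fit the effective corrosion time to t = A * exp(-b * (V - V_eq)),
# the form implied by the anodic Butler-Volmer exponential, using a
# log-linear least-squares fit.
V_EQ = 1.83  # V, the Au oxidation equilibrium potential used in the FEA

V = np.array([2.0, 2.2, 2.4, 2.6, 2.8])        # applied potentials (V)
t = np.array([120.0, 65.0, 36.0, 20.0, 11.0])  # corrosion times (s), illustrative

slope, lnA = np.polyfit(V - V_EQ, np.log(t), 1)
b, A = -slope, np.exp(lnA)

def predicted_time(v):
    """Predicted effective corrosion time (s) at potential v (V)."""
    return A * np.exp(-b * (v - V_EQ))
```

The fitted decay constant b then bundles the α·n·F/(R·T) factor of the Butler-Volmer exponent into a single empirical parameter.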
However, the anodic oxidation may not account for the entire loss of the gold coating. Based on the 2.6-V amperometry experiment (Fig. 2e ), the theoretical weight of the oxidized gold is calculated as 122 μg. The actual value should be lower because water splitting (oxygen generation), as a side reaction, may also contribute to the current density. This amount of gold NPs does not constitute a health hazard, since it is much lower than the reported safe exposure threshold of 5 mg/ml 58 . Meanwhile, the total weight of the diminished gold film on the MN array is calculated as 290 μg, which is significantly higher than the oxidized amount. This is because the major part of the gold layer is not oxidized but exfoliated from the surface due to crevices and cracks. These defects arise from the weakening of the gold membrane as it becomes thinner during corrosion. Such current-induced crevices in metal films have been reported in previous studies 43 , 45 .
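The 122-μg figure follows from Faraday's law applied to the charge passed during amperometry. A minimal sketch, assuming the three-electron Au → Au(III) half reaction and a hypothetical integrated charge of ~0.18 C (the actual integrated charge is not stated in the text):

```python
F = 96485.0      # Faraday constant, C/mol
M_AU = 196.97    # molar mass of gold, g/mol

def oxidized_gold_ug(charge_C: float, z: int = 3) -> float:
    """Mass of gold (micrograms) oxidized by a measured charge,
    assuming all current drives the z-electron anodic half reaction."""
    moles = charge_C / (z * F)        # Faraday's law: n = Q / (z*F)
    return moles * M_AU * 1e6         # g -> ug

mass_ug = oxidized_gold_ug(0.18)      # ~122 ug for the assumed charge
```

Because side reactions such as oxygen evolution also draw current, this Faradaic estimate is an upper bound on the truly oxidized mass, consistent with the caveat above.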
A gold-on-wafer experiment validates the presence of mechanical crevices and exfoliation. The surface roughness during crevice corrosion of the gold layer is also studied with a gold-on-wafer ({100} facet, 150-nm, 1 cm 2 square) model (Supplementary Fig. 7b, c ). The gold layer is connected to a power source, and crevice corrosion is triggered at 2.5 V in 1X DPBS. The surface roughness of the gold is calculated with FIJI ImageJ on optical images collected at various stages of corrosion. As shown in Supplementary Fig. 7d , the surface roughness is monitored every 0.5 min for the first 5 min, then every 1 min from the 5-min stage to the 9-min stage. A sharp increase in surface roughness is observed in the first 1 min immediately after the crevice corrosion is initiated (Supplementary Fig. 7a ). This is consistent with the amperometry study, which shows that most electrochemical corrosion happens within 1 min under this potential. The roughness increase indicates the gold layer's exfoliation from the wafer, which is also observed in the MN corrosion of an SOP. It can be concluded that a significant part of the gold layer is not directly oxidized but exfoliated during the electrical triggers. This corresponds with a previous study using gold film as the control gate for implanted drug reservoirs 44 .
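As a rough sketch of this analysis, surface roughness on an optical image can be proxied by the standard deviation of grayscale pixel intensities (the study itself uses FIJI ImageJ; the tiny arrays below are hypothetical illustrations, not data from the paper):

```python
import statistics

def roughness_proxy(gray_image):
    """Population standard deviation of pixel intensities, a simple
    roughness proxy comparable to an ImageJ StdDev measurement.
    `gray_image` is a 2D list of grayscale values (0-255)."""
    pixels = [p for row in gray_image for p in row]
    return statistics.pstdev(pixels)

smooth_film = [[128, 129], [127, 128]]    # uniform gold surface
cracked_film = [[40, 220], [210, 35]]     # high-contrast crevices
```

A cracked, exfoliating film shows far larger intensity variation than an intact one, which is the signature tracked in Supplementary Fig. 7d.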
To characterize the electrical trigger process, we select two different areas on an SOP MN: the tip area and the waist area, as shown in Fig. 2d . The structural difference between the two areas is the surface curvature, with the former being 1.37 mm −2 and the latter close to 0. The waist area represents the MN's major surface, where most of the drug is released. Energy-dispersive X-ray spectroscopy (EDXS) mapping is carried out on three stages of MNs from both areas as a semi-quantitative surface analysis (Supplementary Figs. 8 and 9 ). Oxygen, carbon, and gold are selected as elements of interest, where oxygen and carbon indicate the exposed polymer body while gold indicates the encapsulated surface of the MN. The ratio quantification is based on the relative weight percentage of only these three elements despite the presence of other elements. As shown in Fig. 2b , c , both the tip and the waist areas show similar trends in element-relative ratios: an increase in oxygen and carbon and a decrease in gold (all by weight%). The waist area, which represents the main body of an MN, shows a more remarkable change in elemental constituents, indicating complete triggered crevice corrosion of the gold layer. The tip of the MN is still partially capped by a small amount of gold at the releasing stage. This part of the gold layer is isolated during the crevice corrosion process and accounts for the rather high remaining gold shown in Fig. 2b . The gold remaining on the tip area is too small in amount to hinder the overall drug release from the entire MN. Silicon and copper are also observed in the EDXS mapping, which come from PDMS residues and conductive wires used during the experiment. More detailed information on the EDXS element mapping appears in the Supplementary Information.
Figure 2g shows the thermal characterization of the SOP during the electrochemical corrosion to validate its thermal safety. The experiment uses an MN array (1.2 mm in length, 150-nm gold coated) connected to a 2.5-V DC power source to undergo crevice corrosion in 1X DPBS. A FLIR thermometer records the temperature of the MN array with and without current in a 25 °C environment. The results show no noticeable change in temperature during the electrochemical corrosion, which indicates a low possibility of tissue damage from heating.
Figure 2j and Supplementary Fig. 10 show the uniformity of the gold layer on the MN prior to crevice corrosion. We also use atomic force microscopy (AFM) to characterize the surface roughness of the gold layer deposited on the polymer, illustrating the uniformity of the sputter-coated thickness, as shown in Supplementary Fig. 11 .
Release control and electrical triggering of microneedle patch
Figure 3a demonstrates the encapsulation performance of the gold coating on MNs. Here, Rhodamine B (Thermo Scientific), a fluorescent dye, is loaded into the microneedle patch (0.3% by weight, ~90 ng per MN) to simulate small-molecule drugs, which can subsequently be quantified by UV-Vis spectroscopy. Figure 3a, b shows the release profiles of Rhodamine B from a bare MN patch (1.2 mm in length) and an MN patch (1.2 mm in length) with a 100-nm gold coating. The release study is carried out in 1X DPBS at 45 °C. Significant color fading on the MN patch without gold coating, as shown in Fig. 3a , and the apparent increase in absorbance (blue, without gold coating, Fig. 3b ) indicate a successful release of the dye into the biofluids. The average release rate of Rhodamine B over an hour is calculated as ~415 ng/min. In contrast, the control experiment (black, with gold coating) shows no significant change in absorbance (Fig. 3b ), suggesting excellent protection of the encapsulated drug from release. The calculation of the cumulative release amount is based on a calibration curve of Rhodamine B solutions and the Beer-Lambert law. A detailed explanation is provided in Supplementary Fig. 13 and Supplementary Table 1 . We further analyze the stability of the gold coating in a 0.5% agar model to better simulate the mechanical properties of animal tissue. The MN array (1.2 mm in length, 150-nm gold coated) remains stable during a 2-week soaking test in the agar model without significant changes in shape or surface morphology (Fig. 3a ). Supplementary Figs. 14 and 15e , together with Fig. 3a , show the stability of the gold film on MNs against soaking in biofluids and mechanical friction in tissues. This indicates that the bonding between gold and PLGA is strong enough to meet the needs of implantation, which corresponds with previous research on Au-polymer adhesion 59 – 61 .
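The absorbance-to-mass conversion follows the Beer-Lambert law with a linear calibration curve. A minimal sketch, using a hypothetical calibration slope and bath volume (the actual calibration values are those of Supplementary Fig. 13 and Supplementary Table 1, which are not reproduced here):

```python
def cumulative_release_ng(absorbance: float,
                          calib_slope: float = 0.02,   # A.U. per (ug/mL), assumed
                          bath_volume_mL: float = 3.0  # assumed bath volume
                          ) -> float:
    """Convert a UV-Vis absorbance reading to cumulative released mass.

    For dilute dye, Beer-Lambert reduces to A = slope * c, so
    c = A / slope; the released mass is c * bath volume.
    """
    conc_ug_per_mL = absorbance / calib_slope
    return conc_ug_per_mL * bath_volume_mL * 1000.0    # ug -> ng
```

Each sampling point on the release curve is converted this way, so the stepwise absorbance increases map directly onto cumulative dosage.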
Figure 3c demonstrates the wireless design of an SOP. An external power source (in this case, a signal generator) is connected to an inductive coil to provide a high-frequency (~MHz level) alternating current (AC). The inductive coil is paired with the receiving coil on the device to achieve wireless power transfer via magnetic resonance coupling. The AC is then converted to direct current (DC) using a full-bridge rectifier. A 2.5-V regulator then regulates the DC to provide a stable potential that drives the electrochemical crevice corrosion of the gold layer on MNs. The configuration of the wireless SOP is illustrated in Fig. 3d . The wireless SOP consists of an energy harvesting module for wireless power, a power amplification module, a System-on-Chip (SoC) module for remote control, and an MN array coupled with counter electrodes (CE). Within the energy harvesting module, a receiver coil, a full-bridge rectifier, and a regulator provide a stable DC output. The output is coupled with a 0.1-μF capacitor to constitute a low-pass filter, improving output stability. An equivalent circuit diagram is provided in Fig. 3g , which emphasizes the energy harvesting module while simplifying the power amplification and SoC modules. A complete circuit diagram can be found in Supplementary Fig. 16 . Supplementary Fig. 17 shows the performance and impedance analyses of the energy harvesting module. The optimal signal input is determined to be around 15 V peak-to-peak at 15 MHz. The measurements shown in Fig. 3e validate the output stability of the wireless SOP with the regulator. The final output power signal is stabilized at around 2.5 V with a standard deviation of ~0.1 V, ensuring precise crevice corrosion control for initiating drug release. The optimal frequency of the input signal is around 36 MHz, as shown in Fig. 3f .
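For efficient magnetic resonance coupling, the receiver coil and its tuning capacitor form an LC tank whose resonance should match the drive frequency. A sketch of that matching condition, assuming a hypothetical 1-μH receiver coil and the 15-MHz drive mentioned above (the paper does not report the coil inductance):

```python
import math

def resonant_frequency_hz(L_H: float, C_F: float) -> float:
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_H * C_F))

def tuning_capacitance_F(L_H: float, f_Hz: float) -> float:
    """Capacitance that tunes inductance L to resonate at frequency f."""
    return 1.0 / (L_H * (2.0 * math.pi * f_Hz) ** 2)

# Tuning a hypothetical 1-uH receiver coil to a 15-MHz drive signal
# requires roughly 113 pF:
C_tank = tuning_capacitance_F(1e-6, 15e6)
```

The same relation explains why detuning either element shifts the harvested power away from its optimum.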
Temporal and spatial control of SOP triggering
To further investigate the temporal and spatial controllability of electrical triggering on the MN patch (Fig. 4a ), we design a multi-domain SOP to realize stepwise, on-demand release. The multi-domain SOP consists of 7 domains of MN arrays (1.2 mm in length, 150-nm gold coated). The gold layer on the PLGA patch is patterned by laser ablation to enable separate triggering of individual MN domains. A layer of PDMS (~10 μm) is then applied onto the patch, except for the hexagonal MN regions, to protect the gold interconnects from dissolving during electrical triggering. Figure 4c shows the electrical triggering schedule for four of the seven MN domains of the SOP loaded with Rhodamine B (0.3% by weight) during its immersion in 1X DPBS at 65 °C as an accelerated study. The electrical triggers use a 2.5-V DC bias for 30 s at every 30-min interval of immersion. Following each interval, immediate sampling of the environmental fluids allows quantitative estimation of the drug-release dosage using UV-Vis spectroscopy, as shown in Fig. 4b . The measurements show a multi-step increase in spectral absorbance, indicating the stepwise increase of drug dosage and confirming on-demand drug release at the desired times (0, 30, 60, 90 min, Fig. 4b ). Figure 4d demonstrates the staged release of MN domains by selectively dissolving the gold encapsulation layer with an electrical trigger (Supplementary Movie 1 ). The red dashed line circles the specific MN array triggered at each stage. The results confirm that our SOP realizes both temporal and spatial control of drug release using digital electrical triggers.
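The stepwise protocol above can be expressed as a simple schedule of (time, domain) events. This sketch mirrors the 30-min interval, 2.5-V, 30-s triggers described in the text; the domain names are hypothetical labels:

```python
def trigger_schedule(domains, interval_min=30, bias_V=2.5, pulse_s=30):
    """Build a stepwise triggering plan: one domain receives a DC
    trigger at each interval, matching the on-demand release protocol
    of the multi-domain patch."""
    return [{"t_min": i * interval_min, "domain": d,
             "bias_V": bias_V, "duration_s": pulse_s}
            for i, d in enumerate(domains)]

plan = trigger_schedule(["D1", "D2", "D3", "D4"])
# Triggers fire at 0, 30, 60, and 90 min, one domain per step.
```

A microcontroller executing such a plan is what turns the patterned gold layer into independently addressable release gates.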
Supplementary Fig. 18 demonstrates the capability of ultrafine spatial control of drug release. Here, we design a miniaturized SOP with a single domain consisting of 8 MNs (1.1 mm in length, coated with 150-nm-thick gold) with 1–3 mm of spatial separation. With a specific design of the gold circuits, each MN in the domain can be triggered individually (as illustrated in Supplementary Fig. 18a ). Supplementary Fig. 18b shows that each MN releases sequentially via electrical triggering (2.2-V DC) within 15 s without interfering with adjacent MNs. The spatial resolution of release control for the SOP is primarily dictated by the patterning techniques used on the gold layer.
Demonstration of SOP in intracranial drug delivery
Beyond its clinical applicability for transdermal drug delivery, the SOP could show impactful utility in facilitating animal behavior studies. Here, we demonstrate intracranial delivery of melatonin using the SOP for an animal sleep study. Melatonin, a hormone naturally produced in the brain by the pineal gland, plays a crucial role in regulating the sleep-wake cycle while also participating in other processes such as the regulation of blood pressure and body temperature 62 – 64 . However, many strains of nocturnal laboratory mice have been found to be deficient in melatonin release, including C57/B6 mice 65 , 66 , and the effects of melatonin are still controversial 67 – 69 . Here, we use C57/B6 mice and implant the SOP to test how exogenous, spatiotemporally controlled release of melatonin in deep regions of the cortex, where melatonin receptor 1 is mainly distributed 70 , would affect the sleep-wake cycle. The high spatiotemporal controllability of melatonin release offered by the SOP may open up new opportunities to understand regional brain responses to melatonin and to study the pathology of sleep-wake disorders. Loading melatonin into the MNs of the SOP follows the solution fabrication method described in Supplementary Fig. 2 . Melatonin dissolved in acetone is mixed with the precursor PLGA solution, which ensures the loading dosage (10–35 wt%) for each MN. The 3-mm MNs are fabricated with PLGA-melatonin ratios from 10:1 to 2:1 (Fig. 5a ). In addition, the drug can be concentrated around the tip of the MN, which ensures deep-brain delivery for mice, as shown by the red dashed line in Fig. 5a . Based on the mixing ratio, the payload of melatonin per MN is estimated to be 22.2 to 81.7 μg, comparable to recommended dosages for mice (4–20 mg/kg) 71 , 72 . The drug payload is modulated by loading different PLGA solutions during the mold casting procedure.
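The 22.2–81.7 μg payload range is consistent with a simple cone-volume estimate. The sketch below assumes a PLGA density of ~1.3 g/cm³ and a ~0.25-mm base radius for a 3-mm MN; both values are illustrative assumptions, not measurements from the paper:

```python
import math

PLGA_DENSITY_G_CM3 = 1.30   # typical PLGA density (assumed)

def payload_ug(height_mm: float, base_radius_mm: float,
               drug_wt_frac: float,
               density_g_cm3: float = PLGA_DENSITY_G_CM3) -> float:
    """Drug payload of a conical MN: cone volume * density * wt fraction."""
    vol_mm3 = math.pi * base_radius_mm ** 2 * height_mm / 3.0
    mass_ug = vol_mm3 * 1e-3 * density_g_cm3 * 1e6   # mm^3 -> cm^3 -> g -> ug
    return mass_ug * drug_wt_frac

low  = payload_ug(3.0, 0.25, 1 / 11)  # PLGA:melatonin = 10:1
high = payload_ug(3.0, 0.25, 1 / 3)   # PLGA:melatonin = 2:1
```

With these assumptions the estimate spans roughly 23–85 μg per MN, bracketing the reported range; note that the 10:1 and 2:1 mixing ratios alone fix the high/low payload ratio at 11/3 ≈ 3.7, matching 81.7/22.2.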
Furthermore, the mechanical properties of the 3-mm MNs used for intracranial delivery are characterized by a fracture test (Supplementary Fig. 15 ). The ultimate strength of a PLGA MN (1.2 mm in length) is determined by the first fracture point in the force-displacement graph, as labeled by the red frame (Fig. 5b ). The fracture point, recognized by a sudden drop in measured force, corresponds to the initial fracture of the MNs, followed by multiple subsequent fractures appearing at different locations of the MNs (Supplementary Fig. 15b , d ). The maximum mechanical strength is derived to be 118 MPa, which is sufficiently rigid for human skin penetration 73 . The calculation uses the force measured at the first fracture point, while the contact area of the MN is approximated based on the tip diameter (around 30 μm, Supplementary Fig. 3f ). Another concern is the stability of the gold layer during implantation. Supplementary Fig. 15e shows the PLGA MN (150-nm gold coated, 1.2 mm) before and after the penetration test. No significant changes can be observed in the comparison of optical images. Figure 3a also demonstrates the stability of the gold layer during the soaking test. In addition, we use the MN array (150-nm gold coated, 1.2 mm) to penetrate chicken thigh tissue for multiple cycles, as presented in Supplementary Fig. 14 . The gold layer remains stable after 20 cycles of penetration.
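The 118-MPa figure is simply the fracture force divided by the approximated tip contact area. A sketch of the calculation, with a hypothetical ~0.083-N fracture force chosen only to illustrate how the reported strength follows from a 30-μm tip:

```python
import math

def ultimate_strength_MPa(force_N: float, tip_diameter_um: float) -> float:
    """Ultimate strength from the first fracture point: measured force
    over the circular tip contact area (tip diameter ~30 um here)."""
    radius_m = tip_diameter_um * 1e-6 / 2.0
    area_m2 = math.pi * radius_m ** 2
    return force_N / area_m2 / 1e6    # Pa -> MPa

strength = ultimate_strength_MPa(0.083, 30.0)   # ~117 MPa
```

Because the contact area is approximated from the tip diameter alone, the derived strength is sensitive to that diameter; a sharper tip estimate raises the computed stress quadratically.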
Deployment of the melatonin-loaded SOP in live animal models as they move in a caged environment shows the possibility of actively controlling melatonin release to the deep-brain regions of the parietal lobe (Fig. 5g ). To ensure device stability, the SOP is coupled with a custom headstage that can be firmly mounted onto the heads of mice (Supplementary Fig. 19 ). Figure 5h shows an immunohistochemistry analysis of brain tissues from the mice at various recovery stages following SOP implantation. On day 1 post-implantation, the brain tissues in close proximity to the SOP MN show structural damage resulting from mechanical forces during intracranial surgery, which is typical for brain implantation in general 74 . As the mice recover from the implantation, the levels of GFAP (red) and IBA (white) show a significant decrease in concentration and staining range, indicating excellent biocompatibility of the SOP. The tissue in contact with the MN becomes smoother, with fewer rough edges. An obvious increase in neuron regeneration (NeuroTrace, green) can be observed, indicating good recovery from the implantation surgery.
Supplementary Fig. 20 shows an in vivo study to validate the safety and biocompatibility of the SOP in intracranial deployment. Here, the study compares four groups of MNs (bare MN (MN), gold-coated MN (Au-MN), melatonin-loaded MN (Mel-MN), and melatonin-loaded MN with gold coating (Au-Mel-MN)) in their effects on inflammation and the degree of neural damage after one month of implantation. Supplementary Fig. 21a shows typical images of MN implantation sites with GFAP/IBA/NSE/DAPI multiple staining. The number of astrocytes and microglia in the region of interest (ROI) is significantly lower in the Au-MN and Au-Mel-MN groups compared to the bare MN group, which indicates that gold-coated MNs cause less damage to neural tissues and less inflammation than bare PLGA MNs. In addition, the fluorescence density of the markers shows no difference between bare and gold-coated MNs, which indicates that the Au encapsulation on MNs does not affect the expression strength of the GFAP or IBA signals (Supplementary Fig. 21c ).
Furthermore, we use a fiber photometry system to investigate the neural activity in the medial prefrontal cortex (mPFC) after the implantation of MNs (bare MN, Au-MN, and control (no MN), respectively; Supplementary Fig. 22 ). As shown in Supplementary Fig. 22c , there is no observable difference in the Ca 2+ dynamics of mPFC excitatory neurons among the control, bare MN, and Au-MN groups, which indicates that the MNs cause no significant damage to the brain tissue and introduce little influence on neuronal activity.
In addition, we study the bioresorbability of the MN. Though the degradation of PLGA is much slower than that of gold 75 , we demonstrate this process in an accelerated degradation experiment in 65 °C phosphate-buffered saline (PBS), as illustrated in Supplementary Fig. 23 .
We further use the in vivo animal models to characterize the functional performance of the SOP. Here, two additional recording electrodes are inserted adjacent to the location of SOP implantation, as shown in Fig. 5c . First, a set of DC electrical triggers (5-s duration, 2.5 V) is delivered and recorded (5 mm from the stimulation electrode), as shown in Fig. 5e . The crevice corrosion of the gold layer (thickness 150 nm) on the MN can be completed by applying the 5-s trigger 5–7 times to allow melatonin release. Then, a series of short-pulse stimulation experiments is examined (Supplementary Fig. 24 ). The pulse signals vary from 10 mV to 50 mV in amplitude and 1 Hz (duration 10 ms) or 10 Hz (duration 1 ms) in frequency. The relationship between signal amplitude and recording distance is also studied based on 10-ms pulses of 50 mV (Fig. 5f ) and shows that the stimulation mainly affects a 5-mm area. No significant changes are observed on the gold layer of the MN after short pulses, as demonstrated in Fig. 5d , indicating that electrical signals generated by neurons induce negligible damage to the gold protection layer of the SOP. Furthermore, the results demonstrate that the gold-coated MNs can also be used as stimulation electrodes for the delivery of low-amplitude (10–50 mV), pulsatile signals (Fig. 5f ), which may serve as a strategy for neuronal regeneration 76 , 77 .
Assessment of in-brain drug delivery by microneedles
To investigate the gold encapsulation's performance in controlling drug release from the MN, we fabricate two groups of melatonin-loaded (25 wt%) MNs with the exact specifications shown in Fig. 5a . For effective comparison, the first batch of MNs (Mel-MNs) has no gold encapsulation layer, while the second batch (Au-Mel-MNs) is coated with 200 nm of gold on the MN surface.
First, we conduct an in vivo drug release experiment with the Mel-MNs and a blank control (MNs without melatonin) by stereotaxically implanting the MN into the medial prefrontal cortex (mPFC) of mice (one MN in each, Fig. 6a ). After one week of recovery, we use the Pinnacle sleep system (Sirenia Acquisition) to record EEG and EMG signals. From 19:00 (active period) to 22:00, the power density of the Mel-MN group is compared with that of the control group (Fig. 6b ). The total amount of NREM sleep increased by 42.0%, and the amount of wakefulness decreased by 27.9%, over the 3-h period compared to these parameters in the control group (Fig. 6c, d ). The sleep status is calculated from the EEG/EMG traces and the corresponding hypnograms (Supplementary Figs. 25 – 29 ). The experiment confirms the effective release of melatonin from the Mel-MNs, which successfully modulates the sleep behavior of mice over a week.
Second, we conduct similar in vivo experiments to investigate the feasibility of release control with gold encapsulation using the Au-Mel-MNs. Here, we stereotaxically implant the Au-Mel-MNs into the medial prefrontal cortex (mPFC) of mice (one MN in each, Fig. 6e ). For effective comparison, the EEG and EMG recording starts one week after implantation, in the same manner as described above. Figure 6f shows the power density of the Au-Mel-MN group and the control group recorded from 19:00 to 01:00 (active period), indicating that the delta power density of the Au-Mel-MN group was significantly higher ( P < 0.01) than that of the control group. The significantly enhanced delta-band EEG power indicates a deeper degree of sleep induced by melatonin, compared with the Mel-MN experiment (Fig. 6c, d ). The total amount of NREM sleep increased significantly by 36.7%, REM sleep increased by 57.8%, and wakefulness significantly decreased by 26.0% over the 6-h period, compared to these parameters in the control group (Fig. 6g, h ). Essentially, melatonin release in the mPFC increases the amount of NREM sleep during active periods and enhances delta power density during NREM sleep even more significantly than in the Mel-MN group. This is mainly explained by the delayed release afforded by the gold encapsulation: the release peak of melatonin appears after the controlled degradation of the gold protection layer. In contrast, the Mel-MN group releases melatonin right after implantation, leading to a weakened effect at the time point of recording (one week post-recovery). Collectively, these in vivo experiments further demonstrate the programmed drug release performance of SOPs in freely moving animals and suggest their potential utility in animal behavior studies.
We reveal that programmed release of exogenous melatonin in the mPFC improves NREM sleep, REM sleep, and delta power density, suggesting that the mPFC may represent a novel target for treating sleep disorders.
In this study, we present an on-demand drug-delivery patch that can be digitally controlled to enable delivery precision in both space and time. The spatiotemporal on-demand patch (SOP) uses bioresorbable microneedles with high aspect ratios (3–8) as the drug-loading vehicle and a thin layer of gold (thickness 150 nm) as a release gate that can be digitally controlled with a small electrical trigger (2.5 V for 30 s). This design allows fully active control of drug release at the single-microneedle level with a spatial resolution, demonstrated here, of less than 1 mm 2 , highlighting enabling capabilities beyond most existing drug-delivery devices. This spatial resolution allows more than 20 doses of a drug to be housed within a thin, wearable patch of 1 cm 2 , ensuring comfortable and convenient user adherence for repetitive pharmaceutical treatment over extended periods. The fabrication is compatible with microfabrication processes, which could further refine the spatial resolution to the micrometer scale. The on-demand, rapid-response drug release can be enabled within 30 s following an active electrical trigger.
The fabrication procedure of our SOP uses a simple solution-molding method, offering a low-cost approach with high dimensional customizability. Using laser ablation, microneedles ranging from 0.6 to 3 mm can be manufactured efficiently and with high quality to fit a wide range of drugs, such as melatonin, with controllable payloads 78 . Moreover, the solution-based mold process allows drug-loaded PLGA MNs to be produced at scale with similarly high quality. The process is also readily compatible with integrating electronic modules to enable digital automation in drug delivery.
The multifunctionality of drug delivery and stimulation therapy could synergistically create advanced therapies as potential future avenues of neuroscience research. The SOP presented in this work exhibits a sustained drug-release capability that meets the needs of treating various chronic neural diseases beyond sleep disorders, especially Alzheimer's disease (AD) 50 . As a multiphase neural disease that affects most areas of the brain 51 , 52 , AD requires different drugs at different stages of treatment. For example, acetylcholinesterase inhibitors (AChEIs) are currently the mainstay of treatment for mild-to-moderate AD, while memantine, a non-competitive N-methyl-D-aspartate (NMDA) receptor antagonist that prevents the overactivation of neuronal NMDA receptors, is approved for moderate-to-severe dementia. At the same time, the combination of these two drugs also significantly improves cognitive function in moderate-to-severe AD 79 . The SOP can enable precise joint delivery of multiple drugs to desired brain regions to achieve enhanced treatment. Combined with a burst-release design (such as hollow microneedles), the SOP is also suitable for rapid drug delivery as a timely response to acute neural diseases, such as cataplexy and epilepsy 80 – 82 . The remotely powered SOP allows for active drug delivery at a particular time and place, coupled with real-time behavioral studies such as sleep-quality characterization, which provides a convenient method for drug-performance analysis. The high-resolution and controllability features of the SOP make it suitable for such region- and dosage-sensitive treatments. In addition, many drugs for brain disorders may have detrimental effects on other organs if administered systemically 83 . The regional drug release feature of the SOP enables targeted delivery that minimizes systemic exposure, reducing the risk of unwanted side effects in non-brain tissues.
Moreover, our SOP is capable of wireless operation via near-field communication or, potentially, Bluetooth Low Energy. The agar-soaking test, fracture test, and in vivo intracranial delivery experiments validate the SOP's practical functionality and biocompatibility. Furthermore, gold-coated MNs can extend beyond drug-release control, as the SOP can offer in vivo electrical stimulation. These concepts establish unique approaches in high-precision drug-delivery technologies with additional utilities in advancing fundamental studies of disease pathology (such as cancer metastasis) and neuroscience research, as demonstrated by both benchtop experiments and in vivo studies. Future efforts on fully digital automation of drug delivery will pave the way for the next generation of precision medicine.

Results and discussion
Configuration and characterization of the spatiotemporal on-demand patch (SOP)
Figure 1a highlights the design of an SOP, which consists of two primary parts: i) a drug-loaded microneedle (MN) patch protected with an electrochemically triggerable metal layer (gold) as the drug delivery interface; ii) a near-field communication (NFC) module for the wireless control of triggering signals, controlling the location and schedule of drug release. A flexible printed circuit board (PCB) provides interconnect traces that integrate the two parts into a fully wireless, wearable drug delivery system. The main body of the drug delivery interface (the MN arrays) uses poly(D, L-lactide-co-glycolide) (PLGA, Mw = 50 ~ 75 kDa, ester terminated, Sigma Aldrich) as the matrix material, which can undergo bulk erosion upon contact with biofluids to generate biologically benign byproducts (lactic acid and glycolic acid) 53 . As shown in Fig. 1b , the fabrication relies on sputter deposition to coat a thin layer of gold (thickness 150 nm) onto the drug-loaded MN arrays supported by a PLGA base. The gold encapsulation layer is supported on the surface of solid MNs, which enables sufficient stability at a smaller thickness (150 nm) than those used in previously reported reservoir designs (gold layers thicker than 300 nm; as illustrated in Supplementary Fig. 1 ). Thus, the thickness required for effective drug encapsulation can be smaller. Laser ablation defines the gold traces that connect the separate MN domains to realize spatial control of drug release. A thin polydimethylsiloxane (PDMS) layer (thickness 10 μm) covers the gold layer at the circuitry base regions, exposing only the MN regions to allow physical contact of the gold layer on the MNs with biofluids.
Fabrication of the SOP relies on a low-cost solution-molding procedure (Supplementary Fig. 2 ). The process starts with UV laser ablation to define an MN mold in a PDMS pad. Drop casting the drug-loaded PLGA solution into the PDMS mold and setting it under vacuum allows the PLGA to enter the negative MN molds. Sufficient solidification over 8 h allows easy extraction of the PLGA MNs. Then, the PLGA MNs undergo sputter deposition to coat a 150-nm-thick gold layer that conformally covers the top surface, followed by patterning with a laser ablation system to define the control circuits. Drop casting a thin PDMS layer (thickness 10 μm) onto the gold layer protects the control circuits and exposes the MNs for drug release. The MN patch is then attached to a flexible PCB and connected with a current regulator, a wireless energy harvester, and a microcontroller to complete the fabrication process. The solution-molding method can produce SOPs with: (1) tunable MN lengths from 600 μm to 3 mm and aspect ratios from 3 to 8; (2) high dimensional uniformity in both length and base diameter (Supplementary Fig. 3 ); (3) arbitrary MN array configurations (square, hexagonal, single-needle, etc.). Figure 1e – h shows that the morphology and shape of the PLGA MNs remain stable during both the gold deposition and drug loading procedures.
Figure 1c, d illustrates the overall working mechanism by which our SOP realizes high-precision drug delivery. The electrically triggered crevice corrosion of the gold protective layer acts as the switch for initiating drug release from specific collections of SOP MNs. Upon SOP deployment on the skin, the MNs stay in a standby stage with the drug fully protected by the gold layer. Once a direct current (DC) electrical trigger (2.2–3 V) is applied, electrochemical crevice corrosion starts to occur on the triggered MNs, transitioning them from standby mode to releasing mode. After a short period (<30 s when 2.5 V is applied), the gold protective layer on the MN is fully dissolved, and the MNs are exposed to the bio-environment to enable drug release. The compatible integration of microcontrollers allows the electrical triggers to be applied at precise time points for precision delivery. By patterning the gold layers via microfabrication, the SOP can realize a spatial profile of drug release at high spatial resolution (~1 mm 2 ). A built-in counter electrode is integrated into the SOP to complete the circuit for the in vivo electrochemical crevice corrosion.
Characterization of electrically triggered crevice corrosion
Figure 2 demonstrates the characterization of the SOP's active control of drug release (based on MNs, 1.2 mm in length, 150-nm gold coated). The operation of drug-release control includes three stages: (1) the standby stage, when the MNs are fully coated with gold (labeled as 0 min); (2) the transitioning stage, when the electrical trigger is activated and the gold layer is partially dissolved (labeled as 0.5 min); (3) the releasing stage, when the gold is fully dissolved and the MNs are exposed to the biofluids (labeled as 2 min). 1X Dulbecco's phosphate-buffered saline (DPBS, Corning) is used here to simulate body biofluids. The experiment is conducted at room temperature and triggered by a 2.5-V DC bias. Figure 2a shows optical and SEM images of the MN arrays at different stages, which indicate a noticeable change in surface color and roughness associated with the electrically triggered crevice corrosion. The results demonstrate that the main structure of the MN stays stable during the transitioning stage, and the electrical triggers effectively dissolve the gold into the biofluids and sufficiently expose the core MNs. More characterizations from different perspectives appear in Supplementary Fig. 4 .
The electrochemical crevice corrosion of the gold layer can be triggered by a direct-current potential within 2 min, which is of clinical relevance for a timely response in drug administration. A quantitative amperometry study of current density under different potentials is conducted to understand the electrical triggering behavior. Figure 2e shows I-V measurements of SOP triggering with potential biases ranging from 2.0 V to 2.8 V applied to the MN arrays (1.2 mm in length, 150-nm gold coated). A steep drop in current density appears at 15 s of application for the 2.8-V bias, indicating the endpoint of the electrochemical crevice corrosion. The time at which the current drop appears increases as the potential bias decreases. The time from the beginning to the end of the current flow is defined as the effective corrosion time, plotted against the potential in Fig. 2f . The fitting is based on the Butler-Volmer equation, $i = i_0\left[\exp\left(\frac{\alpha n F \eta}{R T}\right) - \exp\left(-\frac{(1-\alpha) n F \eta}{R T}\right)\right]$, where $i_0$ is the exchange current density, $F$ is the Faraday constant, $\eta$ is the overpotential, $R$ is the ideal gas constant, $T$ is the thermodynamic temperature, $\alpha$ is a coefficient with values ranging from 0 to 1, and $n$ is the number of electrons in the anodic half reaction.
This equation describes the relationship between potential difference and reaction rate 54 . At an applied potential of 2.5 V, the crevice corrosion on the MN is triggered within 30 s; this parameter is applied in the following experiments. We hypothesize that the crevice corrosion driven by constant voltage comprises two parts: anodic oxidation and mechanical crevices. The 2.5-V potential between the anode and cathode is sufficient for triggering gold oxidation coupled with the hydrogen evolution reaction (HER) at ambient conditions (1X DPBS, 25 °C). It is reported that an anodic potential larger than 1.95 V (vs standard hydrogen electrode, SHE) is high enough to drive the below-surface oxidation of gold film 55 . This corresponds with our observations, as the crevice corrosion is triggered at potentials above 2.0 V, even though the HER on the counter electrode is not under standard conditions. The oxidized gold species in neutral and alkaline environments may include oxides, hydroxides, and free elements, mostly in the form of nanoparticles (NPs) 56 , 57 , which in the amounts produced here are benign to the human body 58 . Supplementary Fig. 5 provides a schematic illustration of the two-electrode system of the SOP (Supplementary Fig. 5a ) and an atomic-scale illustration of the anodic oxidation (Supplementary Fig. 5b, c ) on the Au (111) facet in a neutral environment, with the reaction product, Au 2 O 3 , in the form of nanoparticles. The anodic behavior is also studied by finite element analysis (FEA), including reaction potential, reaction rate, current, and potential distribution (Supplementary Fig. 6a–d ). The simulation results show that the reaction rate at 2.5 V is 281 nm/min (Fig. 2h, i ), which is consistent with the experimental observations (150 nm in ~30 s). The anodic polarization curve under the conditions of standard potentials and primary current distribution also shows a minimal triggering potential at around 2.0 V (Supplementary Fig. 6e ).
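The voltage dependence can be sketched numerically. In the Python snippet below, only the 281 nm/min dissolution rate and the 150-nm film thickness are taken from the text; the Butler-Volmer parameter values are hypothetical placeholders.

```python
import math

F, R, T = 96485.0, 8.314, 298.15  # Faraday const (C/mol), gas const (J/mol/K), temp (K)

def butler_volmer(i0, alpha, n, eta):
    """Butler-Volmer current density at overpotential eta (volts)."""
    a = alpha * n * F * eta / (R * T)
    c = (1.0 - alpha) * n * F * eta / (R * T)
    return i0 * (math.exp(a) - math.exp(-c))

# Grounded arithmetic: the simulated 281 nm/min rate at 2.5 V removes
# a 150-nm gold film in about half a minute.
t_s = 150.0 / 281.0 * 60.0
print(round(t_s, 1))  # ~32 s, consistent with the reported ~30 s
```

The exponential dependence on overpotential is what makes the effective corrosion time fall so steeply between 2.0 V and 2.8 V.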
However, the anodic oxidation may not account for the entire loss of the gold coating. Based on the 2.6-V amperometry experiment (Fig. 2e ), the theoretical weight of the oxidized gold is calculated as 122 μg. The actual value should be lower because water splitting (oxygen generation), as a side reaction, may also contribute to the current density. This amount of gold NPs does not constitute a health hazard, since it is much lower than the reported safe exposure threshold of 5 mg/ml 58 . Meanwhile, the total weight of the gold film lost from the MN array is calculated as 290 μg, which is significantly higher than the oxidized amount. This is because the major part of the gold layer is not oxidized but exfoliated from the surface through crevices and cracks. These defects arise from the weakening of the gold membrane as it thins during corrosion. Crevices of this kind, generated on metal films by an electrical current, have been reported in previous studies 43 , 45 .
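The 122-μg figure follows from Faraday's law of electrolysis. The sketch below assumes oxidation of Au to Au(III) and uses a hypothetical integrated charge of ~0.179 C, chosen only to reproduce the reported mass; the actual charge comes from integrating the amperometry trace.

```python
F = 96485.0      # Faraday constant, C/mol
M_AU = 196.97    # molar mass of gold, g/mol
Z = 3            # electrons transferred per Au -> Au(III)

def oxidized_gold_ug(charge_C):
    """Faraday's law: gold mass (micrograms) oxidized by a given charge."""
    return charge_C * M_AU / (Z * F) * 1e6

# Assumed integrated charge implied by the reported 122-ug estimate
print(round(oxidized_gold_ug(0.179)))  # ~122 ug
```

Comparing this electrochemical estimate with the 290 μg of total film loss is what isolates the exfoliated (non-oxidized) fraction.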
A gold-on-wafer experiment validates the presence of mechanical crevices and exfoliation. The surface roughness during crevice corrosion of the gold layer is also studied with a gold-on-wafer model ({100} facet, 150 nm thick, 1 cm 2 square) (Supplementary Fig. 7b, c ). The gold layer is connected to a power source, and crevice corrosion is triggered at 2.5 V in 1X DPBS. The surface roughness of the gold is calculated with FIJI ImageJ on optical images collected at various stages of corrosion. As shown in Supplementary Fig. 7d , the surface roughness is monitored every 0.5 min for the first 5 min, then every 1 min from the 5-min stage to the 9-min stage. A sharp increase in surface roughness is observed in the first 1 min right after the crevice corrosion is initiated (Supplementary Fig. 7a ). This is consistent with the amperometry study, which shows that most electrochemical corrosion happens within 1 min under this potential. The roughness increase indicates the gold layer's exfoliation from the wafer, which is also observed in the MN corrosion from an SOP. It can be concluded that a significant part of the gold layer is not directly oxidized but exfoliated during the electrical triggers. This corresponds with a previous study using gold film as the control gate for implanted drug reservoirs 44 .
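As a toy illustration of the roughness analysis (the study itself uses FIJI ImageJ on optical images), a standard-deviation proxy over synthetic grayscale values shows how exfoliation raises the measured roughness; all pixel values below are simulated.

```python
import random
from statistics import pstdev

def roughness_proxy(pixels):
    """Roughness proxy: population std. dev. of grayscale intensities,
    analogous to ImageJ's StdDev measurement over an optical image."""
    return pstdev(pixels)

random.seed(0)
smooth   = [128 + random.gauss(0, 2)  for _ in range(4096)]  # intact film
corroded = [128 + random.gauss(0, 25) for _ in range(4096)]  # exfoliated film
print(roughness_proxy(smooth) < roughness_proxy(corroded))  # True
```

A monotonically rising proxy of this kind is sufficient to locate the sharp first-minute jump reported in Supplementary Fig. 7d.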
To characterize the electrical trigger process, we select two different areas on an SOP MN: the tip area and the waist area, as shown in Fig. 2d . The structural difference between the two areas is the surface curvature, with the former being 1.37 mm −2 and the latter close to 0. The waist area represents the MN's major surface, where most of the drug is released. Energy-dispersive X-ray spectroscopy (EDXS) mapping is carried out on three stages of MNs from both areas as a semi-quantitative surface analysis (Supplementary Figs. 8 and 9 ). Oxygen, carbon, and gold are selected as elements of interest, where oxygen and carbon indicate the exposed polymer body while gold indicates the encapsulated surface of the MN. The ratio quantification is based on the relative weight percentage of only these three elements, despite the presence of other elements. As shown in Fig. 2b, c , both the tip and the waist areas show similar trends in relative element ratios: an increase in oxygen and carbon and a decrease in gold (all by weight %). The waist area, which represents the main body of an MN, shows a more remarkable change in elemental constituents, indicating complete crevice corrosion of the gold layer. The tip of the MN is still partially capped by a small amount of gold at the releasing stage. This part of the gold layer is isolated during the crevice corrosion process and accounts for the rather high remaining gold shown in Fig. 2b . The gold remaining on the tip area is too little to hinder the overall drug release from the entire MN. Silicon and copper are also observed in the EDXS mapping, originating from PDMS residues and the conductive wires used during the experiment. More detailed information on the EDXS element mapping appears in the Supplementary Information.
Figure 2g shows the thermal characterization of the SOP during electrochemical corrosion to validate thermal safety. The experiment uses an MN array (1.2 mm in length, 150-nm gold coated) connected to a 2.5-V DC power source to undergo crevice corrosion in 1X DPBS. A FLIR thermometer records the temperature of the MN array with and without current in a 25 °C environment. The results show no noticeable change in temperature during the electrochemical corrosion, indicating a low likelihood of tissue damage from heating effects.
Figure 2j and Supplementary Fig. 10 show the uniformity of the gold layer on the MN prior to crevice corrosion. We also use atomic force microscopy (AFM) to characterize the surface roughness of the gold layer deposited on the polymer, illustrating the uniformity of the deposition thickness achieved by sputter coating, as shown in Supplementary Fig. 11 .
Release control and electrical triggering of microneedle patch
Figure 3a demonstrates the encapsulation performance of the gold coating on MNs. Here, Rhodamine B (Thermo Scientific), a fluorescent dye, is loaded into the microneedle patch (0.3% by weight, ~90 ng per MN) to simulate small-molecule drugs, which can be subsequently quantified by UV-Vis spectroscopy. Figure 3a, b show the release profiles of Rhodamine B from a bare MN patch (1.2 mm in length) and an MN patch (1.2 mm in length) with a 100-nm gold coating. The release study is carried out in 45 °C 1X DPBS. Significant color fading on the MN patch without gold coating, as shown in Fig. 3a , and the apparent increase in absorbance (blue, without gold coating, Fig. 3b ) indicate a successful release of dye into the biofluids. The average release rate of Rhodamine B over an hour is calculated as ~415 ng/min. In contrast, the control experiment (black, with gold coating) shows no significant change in absorbance (Fig. 3b ), suggesting excellent protection of the encapsulated drugs from release. The calculation of the cumulative release amount is based on a calibration curve of Rhodamine B solutions and the Beer-Lambert law. A detailed explanation is provided in Supplementary Fig. 13 and Supplementary Table 1 . We further analyze the stability of the gold coating in a 0.5% agar model to better simulate the mechanical properties of animal tissue. The MN array (1.2 mm in length, 150-nm gold coated) remains stable during a 2-week soaking test in the agar model without significant changes in shape or surface morphology (Fig. 3a ). Supplementary Figs. 14 and 15e , together with Fig. 3a , show the stability of the gold film on MNs against soaking in biofluids and mechanical friction in tissues. This indicates that the bonding between gold and PLGA is strong enough to meet the needs of implantation, which corresponds with previous research on Au-polymer adhesion 59 – 61 .
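The Beer-Lambert conversion from absorbance to cumulative release can be sketched as follows. The calibration slope and bath volume below are hypothetical placeholders, not the values from the actual Rhodamine B calibration curve.

```python
def concentration_from_absorbance(absorbance, slope):
    """Beer-Lambert law, A = slope * c, where slope = epsilon * path length
    comes from a dye calibration curve (value assumed here)."""
    return absorbance / slope

SLOPE = 0.02     # absorbance per (ng/mL), hypothetical calibration slope
BATH_ML = 3.0    # sampled bath volume in mL, hypothetical

absorbances = [0.0, 0.10, 0.25, 0.40]          # successive UV-Vis readings
cumulative_ng = [concentration_from_absorbance(a, SLOPE) * BATH_ML
                 for a in absorbances]
print([round(x, 1) for x in cumulative_ng])  # cumulative release in ng
```

In practice the slope is fitted from a dilution series of known concentrations, which is what the calibration curve in the supplement provides.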
Figure 3c demonstrates the wireless design of an SOP. An external power source (in this case, a signal generator) is connected to an inductive coil to provide a high-frequency (~MHz level) alternating current (AC). The inductive coil is paired with the receiving coil on the device to achieve wireless power transfer via magnetic resonance coupling. The AC is then converted to direct current (DC) using a full-bridge rectifier. A 2.5-V regulator then regulates the DC to provide a stable potential that facilitates the electrochemical crevice corrosion of the gold layer on MNs. The configuration of the wireless SOP is illustrated in Fig. 3d . The wireless SOP consists of an energy-harvesting module for wireless power, a power-amplification module, a System-on-Chip (SoC) module for remote control, and an MN array coupled with counter electrodes (CE). Within the energy-harvesting module, a receiver coil, a full-bridge rectifier, and a regulator are used to provide a stable DC output. The output is coupled with a 0.1-μF capacitor to constitute a low-pass filter, improving output stability. An equivalent circuit diagram is provided in Fig. 3g , which emphasizes the energy-harvesting module while simplifying the power amplification and SoC. A complete circuit diagram can be found in Supplementary Fig. 16 . Supplementary Fig. 17 shows the performance and impedance analyses of the energy-harvesting module. The optimal signal input is determined to be around 15 V peak-to-peak at 15 MHz. The measurements shown in Fig. 3e validate the output stability of the wireless SOP with the regulator. The final output power signal is stabilized at around 2.5 V with a standard deviation of ~0.1 V, ensuring precise crevice-corrosion control for initiating drug release. The optimal frequency of the input signal is around 36 MHz, as shown in Fig. 3f .
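Magnetic resonance coupling works best when the receiver tank resonates near the drive frequency. A minimal sketch of the standard LC-resonance relation, with hypothetical coil values tuned near the ~15-MHz input (the actual coil and capacitor values are not given in the text):

```python
import math

def resonant_frequency_hz(L_h, C_f):
    """LC resonance used in magnetic resonance coupling:
    f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_h * C_f))

# Hypothetical tank values chosen to resonate near the ~15-MHz drive
L = 1.0e-6      # H, assumed receiver-coil inductance
C = 112.6e-12   # F, assumed tuning capacitance
print(round(resonant_frequency_hz(L, C) / 1e6, 1))  # ~15.0 MHz
```

Detuning the tank away from the drive frequency rapidly reduces the harvested power, which is why the impedance analysis in the supplement matters.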
Temporal and spatial control of SOP triggering
To further investigate the temporal and spatial controllability of electrical triggering on the MN patch (Fig. 4a ), we design a multi-domain SOP to realize stepwise, on-demand release. The multi-domain SOP consists of 7 domains of MN arrays (1.2 mm in length, 150-nm gold coated). The gold layer on the PLGA patch is patterned by laser ablation to enable separate triggering of individual MN domains. A layer of PDMS (~10 μm) is then applied onto the patch, except for the hexagonal MN regions, to protect the gold interconnects from dissolving during electrical triggering. Figure 4c shows the electrical triggering schedule for four of the seven MN domains for the SOP loaded with Rhodamine B (0.3% by weight) during its immersion in 65 °C 1X DPBS as an accelerated study. The electrical triggers use a 2.5-V DC bias for 30 s at every 30-min interval of immersion. Following each interval, immediate sampling of the environmental fluids allows quantitative estimation of the drug-release dosage using UV-Vis spectroscopy, as shown in Fig. 4b . The measurements show a multi-step increase in spectral absorbance, indicating the stepwise increase of drug dosage and confirming on-demand drug release at the desired times (0, 30, 60, 90 min; Fig. 4b ). Figure 4d demonstrates the staged release of MN domains by selectively dissolving the gold encapsulation layer with an electrical trigger (Supplementary Movie 1 ). The red dashed line circles the specific MN array triggered at each stage. The results confirm that our SOP realizes both temporal and spatial control of drug release using digital electrical triggers.
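The stepwise triggering schedule can be expressed as a simple plan, using the 30-min interval, 30-s pulse, and 2.5-V values from the text; the domain labels are hypothetical.

```python
def trigger_schedule(domains, interval_min=30, pulse_s=30, volts=2.5):
    """Stepwise trigger plan: one MN domain per interval, each triggered
    with a 2.5-V DC bias for 30 s (values from the accelerated study)."""
    return [{"t_min": i * interval_min, "domain": d, "V": volts, "dur_s": pulse_s}
            for i, d in enumerate(domains)]

plan = trigger_schedule(["D1", "D2", "D3", "D4"])
print([step["t_min"] for step in plan])  # [0, 30, 60, 90] minutes
```

A microcontroller on the patch would walk such a plan, closing the circuit to one domain's gold interconnect at each time point.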
Supplementary Fig. 18 demonstrates the capability of ultrafine spatial control of drug release. Here, we design a miniaturized SOP with a single domain consisting of 8 MNs (1.1 mm in length, coated with 150-nm-thick gold) with 1–3 mm of spatial separation. With a specific design of the gold circuits, each MN in the domain can be triggered individually (as illustrated in Supplementary Fig. 18a ). Supplementary Fig. 18b shows that each MN follows sequential release via electrical triggering (2.2-V DC) within 15 s without interfering with adjacent MNs. The patterning techniques used on the gold layer primarily dictate the spatial resolution of release control for the SOP.
Demonstration of SOP in intracranial drug delivery
Beyond its clinical applicability for transdermal drug delivery, the SOP could show impactful utility in facilitating animal behavior studies. Here, we demonstrate intracranial delivery of melatonin using the SOP for an animal sleep study. Melatonin, a hormone naturally produced in the brain by the pineal gland, plays a crucial role in regulating the sleep-wake cycle while also participating in other regulatory processes such as blood pressure and body temperature 62 – 64 . However, many strains of nocturnal laboratory mice, including C57/B6 mice, have been found to be deficient in melatonin release 65 , 66 , and the effects of melatonin are still controversial 67 – 69 . Here, we use C57/B6 mice and implant the SOP to test how exogenous, spatiotemporally controlled release of melatonin in deep locations of the cortex, where melatonin receptor 1 is mainly distributed 70 , would affect the sleep-wake cycle. The high spatiotemporal controllability of melatonin release offered by the SOP may open up new opportunities to understand regional brain responses to melatonin and to study the pathology of sleep-wake disorders. Loading melatonin into the MNs of the SOP follows the solution fabrication method described in Supplementary Fig. 2 . Melatonin dissolved in acetone can be mixed with the precursor PLGA solution, which ensures the loading dosage (10–35 wt%) for each MN. The 3-mm MNs are fabricated with PLGA-melatonin ratios from 10:1 to 2:1 (Fig. 5a ). In addition, the drug can be concentrated around the tip of the MN, which ensures deep-brain delivery for mice, as shown by the red dashed line in Fig. 5a . Based on the mixing ratio, the payload of melatonin per MN is estimated to be 22.2 to 81.7 μg, comparable to recommended dosages for mice (4–20 mg/kg) 71 , 72 . The drug payload is modulated by loading different PLGA solutions during the mold-casting procedure.
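The payload estimate follows directly from the mixing ratio. In the sketch below, the total needle mass (~245 μg) is an assumed value back-calculated from the reported 22.2–81.7 μg range, not a figure stated in the text.

```python
def payload_per_mn_ug(mn_mass_ug, plga_to_drug):
    """Melatonin payload per MN from the PLGA:melatonin mixing ratio;
    drug fraction is 1 / (ratio + 1) of the total needle mass."""
    return mn_mass_ug / (plga_to_drug + 1.0)

MN_MASS_UG = 245.0  # assumed total needle mass, implied by reported payloads
print(round(payload_per_mn_ug(MN_MASS_UG, 10), 1))  # 10:1 ratio -> ~22.3 ug
print(round(payload_per_mn_ug(MN_MASS_UG, 2), 1))   # 2:1 ratio  -> ~81.7 ug
```

The two mixing ratios thus bracket the reported payload range from a single needle geometry.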
Furthermore, the mechanical properties of the 3-mm-tall MNs used for intracranial delivery are characterized by a fracture test (Supplementary Fig. 15 ). The ultimate strength of the PLGA MN (1.2 mm in length) is determined by the first fracture point in the force-displacement graph, as labeled by the red frame (Fig. 5b ). The fracture point, recognized by a sudden drop in measured force, corresponds to the initial fracture of the MNs, followed by multiple subsequent fractures appearing at different locations on the MNs (Supplementary Fig. 15b, d ). The maximum mechanical strength is derived to be 118 MPa, which is sufficiently rigid for human skin penetration 73 . The calculation considers the pressure measured at the first fracture point, while the contact area of the MN is approximated based on the tip diameter (around 30 μm, Supplementary Fig. 3f ). Another concern is the stability of the gold layer during implantation. Supplementary Fig. 15e shows the PLGA MN (150-nm gold coated, 1.2 mm) before and after the penetration test. No significant changes can be observed based on the comparison of optical images. Figure 3a also demonstrates the stability of the gold layer during the soaking test. In addition, we use the MN array (150-nm gold coated, 1.2 mm) to penetrate chicken thigh tissue for multiple cycles, as presented in Supplementary Fig. 14 . The gold layer remains stable after 20 cycles of penetration.
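The 118-MPa figure corresponds to the fracture force divided by the tip cross-section. In the sketch below, the fracture force is an assumed value back-calculated from the reported strength and the ~30-μm tip diameter.

```python
import math

def ultimate_strength_mpa(fracture_force_N, tip_diameter_m):
    """Stress at first fracture, approximating the contact area by the
    needle-tip cross-section (tip diameter ~30 um in the text)."""
    area_m2 = math.pi * (tip_diameter_m / 2.0) ** 2
    return fracture_force_N / area_m2 / 1e6

# Assumed fracture force, back-calculated from the reported 118 MPa
print(round(ultimate_strength_mpa(0.0834, 30e-6)))  # ~118 MPa
```

Because the tip area enters quadratically through the diameter, small uncertainty in the 30-μm estimate dominates the uncertainty in the derived strength.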
Deployment of the melatonin-loaded SOP in live animal models as they move in a caged environment shows possibilities for actively controlling melatonin release to the deep-brain regions of the parietal lobe (Fig. 5g ). To ensure device stability, the SOP is coupled with a custom headstage that can be firmly mounted onto the mouse's head (Supplementary Fig. 19 ). Figure 5h shows an immunohistochemistry analysis of brain tissues from the mice at various recovery stages following SOP implantation. On day 1 post-implantation, the brain tissues in close proximity to the SOP MN show structural damage resulting from mechanical forces during intracranial surgery, which is typical for general brain implantation 74 . As the mice recover from the implantation, the levels of GFAP (red) and IBA (white) show a significant decrease in concentration and staining range, indicating excellent biocompatibility of the SOP. The tissue in contact with the MN becomes smoother, with fewer rough edges. An obvious increase in neuron regeneration (NeuroTrace, green) can be observed, indicating good recovery from the implantation surgery.
Supplementary Fig. 20 shows an in vivo study to validate the safety and biocompatibility of the SOP in intracranial deployment. Here, the study compares four groups of MNs (bare MN (MN), gold-coated MN (Au-MN), melatonin-loaded MN (Mel-MN), and melatonin-loaded MN with gold coating (Au-Mel-MN)) in their effects on inflammation and the degree of neural damage after one-month implantation. Supplementary Fig. 21a shows typical images of MN implantation sites with GFAP/IBA/NSE/DAPI multiple staining. As a result, the number of astrocytes and microglia in the region of interest (ROI) is significantly lower in the Au-MN and Au-Mel-MN groups compared to the bare MN group, which indicates that gold-coated MNs cause less damage to neural tissues and less inflammation than bare PLGA MNs. In addition, the fluorescence density of the markers shows no difference between bare and gold-coated MNs, which indicates that the Au encapsulation on MNs does not affect the expression strength of the GFAP or IBA signals (Supplementary Fig. 21c ).
Furthermore, we use a fiber photometry system to investigate the neural activity in the medial prefrontal cortex (mPFC) after the implantation of MNs (bare MN, Au-MN, and control (no MN), respectively; Supplementary Fig. 22 ). As shown in Supplementary Fig. 22c , there is no observable difference in the Ca 2+ dynamics of mPFC excitatory neurons between the control, bare MN, and Au-MN groups, which indicates that the MNs cause no significant damage to the brain tissue and have little influence on neuronal activity.
In addition, we study the bioresorbability of the MN. Though the degradation of PLGA is much slower than that of gold 75 , we show this process for PLGA in an accelerated degradation experiment in 65 °C phosphate-buffered saline (PBS), as illustrated in Supplementary Fig. 23 .
We further use the in vivo animal models to characterize the functional performance of the SOP. Here, two additional recording electrodes are inserted adjacent to the location of SOP implantation, as shown in Fig. 5c . First, a set of DC electrical triggers (5-s duration, 2.5 V) is delivered and recorded (5 mm from the stimulation electrode), as shown in Fig. 5e . The crevice corrosion of the gold layer (thickness 150 nm) on the MN can be completed by applying the 5-s trigger 5–7 times to allow melatonin release. Then, a series of short-pulse stimulation experiments is conducted (Supplementary Fig. 24 ). The pulse signals vary from 10 mV to 50 mV in amplitude and 1 Hz (duration 10 ms) or 10 Hz (duration 1 ms) in frequency. The relationship between signal amplitude and recording distance is also studied based on 10-ms pulses of 50 mV (Fig. 5f ) and is shown to mainly affect a 5-mm area. No significant changes are observed on the gold layer of the MN after the short pulses, as demonstrated in Fig. 5d , indicating that electrical signals generated by neuron cells induce negligible damage to the gold protection layer of the SOP. Furthermore, the results demonstrate that the gold-coated MNs can also be used as stimulation electrodes for the delivery of low-amplitude (10–50 mV), pulsatile signals (Fig. 5f ), which may serve as a strategy for neuronal regeneration 76 , 77 .
Assessment of in-brain drug delivery by microneedles
To investigate the performance of the gold encapsulation in controlled drug release from the MN, we fabricate two groups of melatonin-loaded (25 wt%) MNs with the exact specifications shown in Fig. 5a . For effective comparison, the first batch of MNs (Mel-MNs) has no gold encapsulation layer, while the second batch (Au-Mel-MNs) is coated with 200 nm of gold on the MN surface.
First, we conduct the in vivo drug-release experiment with the Mel-MNs and a blank control (MNs without melatonin) by stereotaxically implanting the MN into the medial prefrontal cortex (mPFC) of mice (one MN in each; Fig. 6a ). After one week of recovery, we use the Pinnacle sleep system (Sirenia Acquisition) to record EEG and EMG signals for analysis. From 19:00 (active period) to 22:00, the power density of the Mel-MN group is compared with that of the control group (Fig. 6b ). The total amount of NREM sleep increased by 42.0%, and the amount of wakefulness decreased by 27.9%, over the 3-h period compared to these parameters in the control group (Fig. 6c, d ). The sleep status is calculated from the EEG/EMG traces and the corresponding hypnograms (Supplementary Figs. 25 – 29 ). The experiment confirms the effective release of melatonin from the Mel-MNs, which successfully modulates the sleep behavior of mice over a week.
Second, we conduct similar in vivo experiments to investigate the feasibility of release control with gold encapsulation using the Au-Mel-MNs. Here, we apply the Au-Mel-MNs by stereotaxically implanting them into the medial prefrontal cortex (mPFC) of mice (one MN in each; Fig. 6e ). For effective comparison, the EEG and EMG recording starts one week after implantation, in the same way as described above. Figure 6f shows the power density of the Au-Mel-MN group and the control group recorded from 19:00 to 01:00 (active period), indicating that the delta power density of the Au-Mel-MN group was significantly higher ( P < 0.01) than that of the control group. The significantly enhanced delta-band EEG power indicates a deeper sleep induced by melatonin, compared with the Mel-MN experiment (Fig. 6c, d ). The total amount of NREM sleep increased significantly by 36.7%, REM sleep increased by 57.8%, and wakefulness significantly decreased by 26.0% over the 6-h period, compared to these parameters in the control group (Fig. 6g, h ). Essentially, melatonin release in the mPFC increases the amount of NREM sleep during active periods and enhances delta power density during NREM sleep even more significantly than shown in the Mel-MN group. This is mainly explained by the delayed release afforded by the gold encapsulation: the release peak of melatonin appears only after the controlled dissolution of the gold protection layer. In contrast, the Mel-MN group releases melatonin right after implantation, leading to a weakened effect at the time point of recording (one week post-recovery). Collectively, these in vivo experiments further demonstrate the programmed drug-release performance of SOPs in freely moving animals and suggest their potential utility in animal behavior studies.
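The stage percentages above are derived from epoch-scored hypnograms. A minimal sketch of that bookkeeping, with hypothetical epoch counts chosen so that NREM rises by 42%, echoing the Mel-MN result:

```python
from collections import Counter

def stage_minutes(hypnogram, epoch_s=10):
    """Minutes per sleep stage from an epoch-scored hypnogram
    ('W' wake, 'N' NREM, 'R' REM), as derived from EEG/EMG traces."""
    return {s: n * epoch_s / 60.0 for s, n in Counter(hypnogram).items()}

def percent_change(treated, control):
    return 100.0 * (treated - control) / control

# Toy hypnograms (hypothetical scoring, 10-s epochs)
control = ["W"] * 120 + ["N"] * 50 + ["R"] * 10
treated = ["W"] * 90 + ["N"] * 71 + ["R"] * 19

c, t = stage_minutes(control), stage_minutes(treated)
print(round(percent_change(t["N"], c["N"]), 1))  # NREM change, ~42.0%
```

Real scoring assigns each epoch a stage from EEG/EMG features; the arithmetic afterwards is exactly this counting step.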
We revealed that programmed release of exogenous melatonin in the mPFC improves NREM sleep, REM sleep, and delta power density, suggesting that this region may represent a novel target for treating sleep disorders.
In this study, we present an on-demand drug-delivery patch that can be digitally controlled to enable delivery precision in both space and time. The spatiotemporal on-demand patch (SOP) uses bioresorbable microneedles with high aspect ratios (3–8) as the drug-loading vehicle and a thin layer of gold (thickness 150 nm) as a release gate that can be digitally controlled with a small electrical trigger (2.5 V for 30 s). This design allows fully active control of drug release at the single-microneedle level, with a spatial resolution, demonstrated here, of less than 1 mm 2 , highlighting its enabling capabilities over most existing drug-delivery devices. This spatial resolution allows more than 20 doses of a drug to be housed within a thin, wearable patch of 1 cm 2 , ensuring comfortable and convenient user adherence for repetitive pharmaceutical treatment over extended periods. The fabrication is compatible with microfabrication processes, which could further refine the spatial resolution to micrometer scales. The on-demand, rapid-response drug release can be enabled within 30 s following an active electrical trigger.
The fabrication procedure of our SOP uses a simple solution-molding method, offering a low-cost approach with high dimensional customizability. Using laser ablation, microneedles ranging from 0.6 to 3 mm can be manufactured efficiently and with high quality to fit a wide range of drugs (e.g., melatonin) with controllable payloads 78 . Moreover, the solution-based mold process allows drug-loaded PLGA MNs to be produced at scale with similarly high quality. The process also allows convenient compatibility for integrating electronic modules to enable digital automation in drug delivery.
The multifunctionality of drug delivery and stimulation therapy could synergistically create advanced therapies as potential future avenues of neuroscience research. The SOP presented in this work exhibits a continuous, sustained drug-release feature that meets the need to treat various chronic neural diseases apart from sleep disorders, especially Alzheimer's disease (AD) 50 . As a multiphase neural disease that affects most areas of the brain 51 , 52 , AD requires different drugs at different stages. For example, acetylcholinesterase inhibitors (AChEIs) are currently the mainstay of treatment for mild-to-moderate AD. In contrast, memantine is approved for mild-to-moderate dementia as a non-competitive N-methyl-D-aspartate (NMDA) receptor antagonist that prevents the overactivation of neuronal NMDA receptors. At the same time, the combination of these two drugs also significantly improves cognitive function in moderate-to-severe AD 79 . The SOP can enable precise joint delivery of multiple drugs to desired brain regions to achieve enhanced treatment. Combined with a burst-release design (such as hollow microneedles), the SOP is also suited for rapid drug delivery as a timely response to acute neural diseases, such as cataplexy and epilepsy 80 – 82 . The remotely powered SOP allows for active drug delivery at a particular time and place, coupled with real-time behavioral studies such as sleep-quality characterization, which provides a convenient method for drug-performance analysis. The high-resolution and controllability features of the SOP make it suitable for such regional and dosage-sensitive treatments. In addition, many drugs for brain disorders may have detrimental effects on other organs if administered systemically 83 . The regional drug-release feature of the SOP enables targeted drug delivery that minimizes systemic exposure, reducing the risk of unwanted side effects in non-brain tissues.
Moreover, our SOP is capable of wireless operation via near-field communication or, potentially, Bluetooth Low Energy. The agar-soaking test, fracture test, and in vivo intracranial delivery experiments validate the SOP's practical functionality and biocompatibility. Furthermore, gold-coated MNs can extend beyond drug-release control, as the SOP can offer in vivo electrical stimulation. These concepts establish unique approaches in high-precision drug-delivery technologies with additional utility in advancing fundamental studies of disease pathology (such as cancer metastasis) and neuroscience research, as demonstrated by both benchtop experiments and in vivo studies. Future efforts on fully digital automation of drug delivery will pave the way for the next generation of precision medicine.

Transdermal drug delivery is of vital importance for medical treatments. However, user adherence to long-term repetitive drug delivery poses a grand challenge. Furthermore, the dynamic and unpredictable disease progression demands a pharmaceutical treatment that can be actively controlled in real time to ensure medical precision and personalization. Here, we report a spatiotemporal on-demand patch (SOP) that integrates drug-loaded microneedles with biocompatible metallic membranes to enable electrically triggered active control of drug release. Precise control of drug release to targeted locations (<1 mm 2 ), rapid drug-release response to electrical triggers (<30 s), and multi-modal operation involving both drug release and electrical stimulation highlight the novelty. Solution-based fabrication ensures high customizability and scalability to tailor the SOP for various pharmaceutical needs. The wireless-powered and digitally controlled SOP demonstrates great promise in achieving full automation of drug delivery, improving user adherence while ensuring medical precision. Based on these characteristics, we utilized SOPs in sleep studies.
We revealed that programmed release of exogenous melatonin from SOPs improves the sleep of mice, indicating potential value for basic research and clinical treatments.
Microneedle patches that can actively address individual needles are challenging to realize. Here, the authors introduce a spatiotemporal on-demand patch for precise and personalized drug delivery, utilizing electrically triggered control with drug-loaded microneedles and biocompatible metallic membranes.
Supplementary information
Source data
The online version contains supplementary material available at 10.1038/s41467-023-44532-0.
Acknowledgements
This work was supported by the start-up funds from University of North Carolina at Chapel Hill and the fund from National Science Foundation under award # ECCS-2139659 (received by W.B.). Research reported in this publication was also supported by the National Institute of Biomedical Imaging and Bioengineering at the National Institutes of Health under award # 1R01EB034332-01 (received by W.B.). This work was performed in part at the Chapel Hill Analytical and Nanofabrication Laboratory, CHANL, a member of the North Carolina Research Triangle Nanotechnology Network, RTNN, which is supported by the National Science Foundation, Grant ECCS-2025064, as part of the National Nanotechnology Coordinated Infrastructure, NNCI.
Author contributions
W.B. conceived and directed the project. Y.W. and Z.C. performed experiments and prepared supplementary information. B.D., W.L., S.X., L.Z., T.W., P.H., Z.Y., and W.X. helped fabricate the device and collect the data. Z.C., Z.H., and J.S. performed the animal experiments. Y.W., Z.C., J.S., Z.H., and W.B. wrote the paper. All authors discussed the results and commented on the manuscript.
Peer review
Peer review information
Nature Communications thanks Canyang Zhang, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Data availability
The data generated in this study are available within the article and its Supplementary Information. Source data are provided with this paper.
Competing interests
The University of North Carolina at Chapel Hill filed a provisional patent application surrounding this work: Wubin Bai, Yihang Wang, "Wearable Apparatus for Deep Tissue Sensing and Digital Automation of Drug Delivery", no. 63/343,888, filed on May 19, 2022. The method to digitally automate the control of drug delivery is included in the patent. The remaining authors declare no competing interests.

Nat Commun. 2024 Jan 13; 15:511 (CC BY)
PMC10787769 (PMID: 38218940)

Materials and methods
Chemicals
All solvents for synthesis were purchased from VWR and used as received. Water (18.2 MΩ∙cm) was collected from an Arium (Sartorius) laboratory-grade water purification system. Gadolinium(III) nitrate hexahydrate (Gd(NO 3 ) 3 ·6H 2 O), sodium tungstate (Na 2 WO 4 ) and potassium chloride (KCl) were purchased from VWR. Hexadecyltrimethylammonium bromide (CTAB, 99 + %) was purchased from ACROS ORGANICS. 2-methoxy(polyethyleneoxy)propyltrimethoxysilane (90%, 6–9 PEG units) was purchased from ABCR Chemie. Triethylamine (TEA, > 99.5%) and hydrochloric acid (HCl, 32 wt.% in H 2 O) were purchased from Merck. Sulfo-cyanine3 (Cy3) NHS ester (95%) was purchased from Lumiprobe. For substrate preparation and printing, toluene, chloroform, ethanol, dimethyl sulfoxide (DMSO), and glycerol were purchased from Sigma-Aldrich (Germany). Tamra amine, 5-isomer (NH 2 -Tamra) was bought from Lumiprobe GmbH (Germany). Thiol-PEG-Biotin, MW 2000 (SH-Biotin) and Fluorescein-PEG-Thiol (SH-FITC) were obtained from Nanocs Company (USA). Biotin-dPEG 7 -NH 2 (NH 2 -biotin), streptavidin Cyanine-3 (SA-Cy3), 3-methacryloxypropyltrimethoxysilane (MPS-silane), bovine serum albumin (BSA), and phosphate-buffered saline (PBS) were obtained from Sigma-Aldrich (Germany). A 10X PBS solution contains NaCl: 1.37 M, KCl: 27 mM, Na 2 HPO 4 : 100 mM, KH 2 PO 4 : 18 mM.
POM synthesis
All reactants and solvents were of commercially available grade and used without any further purification, and all reactions were carried out under aerobic conditions. Synthesis of K 12.5 Na 1.5 [NaP 5 W 30 O 110 ]·15H 2 O, abbreviated as {NaP 5 W 30 }-POM: This POM ligand was synthesized according to a published procedure 12 . Initially, 33 g (100.1 mmol) of sodium tungstate was dissolved in 30 mL of water in a beaker. Subsequently, 26.5 mL of 85% phosphoric acid was added with stirring. Once a clear solution was obtained, it was transferred into autoclaves and prepared for solvothermal synthesis. The autoclaves, containing the reaction solution, were then heated in the oven at 120 °C overnight. After cooling, 15 mL of water were added, and 10 g of KCl were introduced to the solution. A white-yellow precipitate formed, which was filtered off and washed with a 2 M potassium acetate solution and methanol. The resulting white-yellow solid was subjected to two rounds of recrystallization in water to eliminate the {P 2 W 18 }-POM side-product. K 12 [GdP 5 W 30 O 110 ]·15H 2 O (abbreviated to {GdP 5 W 30 } in the main text) was prepared following a previously described method with slight modification 12 . Briefly, 1 g (0.12 mmol) of the {NaP 5 W 30 }-POM ligand was dissolved in 12 mL H 2 O and heated to 65 °C while stirring. In the process, the POM dissolved completely. A solution of 2 equivalents (0.24 mmol) of Gd(NO 3 ) 3 ·6H 2 O in 3 mL H 2 O was prepared. This solution was added in small portions to the POM solution with stirring. A suspension formed, which was placed into an autoclave. This was heated in the oven at 180 °C overnight. After the solution had cooled to room temperature, the product was isolated by the addition of 4 g of solid KCl. The precipitate formed was isolated from the mother liquor by filtration and dried in air. After drying, the product was characterized by IR spectroscopy (Fig. S7 ).
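As a numerical cross-check of the stoichiometry above, the mass of Gd(NO 3 ) 3 ·6H 2 O corresponding to 2 equivalents relative to 0.12 mmol of the {NaP 5 W 30 } ligand can be computed from standard atomic weights. This sketch is an illustration, not part of the published procedure; the helper name is ours.

```python
# Molar mass of Gd(NO3)3*6H2O from standard atomic weights (Gd, N, O, H):
M_GD_NITRATE_HEXAHYDRATE = 157.25 + 3 * (14.007 + 3 * 15.999) + 6 * (2 * 1.008 + 15.999)
# ~451.4 g/mol

def reagent_mass_mg(limiting_mmol, equivalents, molar_mass_g_mol):
    """Mass (mg) of a reagent needed for a given number of equivalents."""
    return limiting_mmol * equivalents * molar_mass_g_mol

mass = reagent_mass_mg(0.12, 2, M_GD_NITRATE_HEXAHYDRATE)
print(f"Gd(NO3)3*6H2O to weigh in: {mass:.0f} mg")  # ~108 mg
```

So roughly 108 mg of the gadolinium salt corresponds to the 0.24 mmol (2 eq.) charge stated in the text.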
Preparation of MSPs with amine-group-functionalized pore walls (MSPs + )
Pristine MSPs were prepared according to a previously described procedure 51 . Briefly, to a stirring dispersion of CTAB (100 mg, 0.27 mmol) in an H 2 O/EtOH mixture (60/30 v/v, 90 mL) containing NH 3 (28 wt%, 710 μL), TEOS (250 μL, 1.13 mmol) was added. The resulting reaction mixture was stirred for 16 h at room temperature. Subsequently, 1/3 of the particle dispersion was taken and used for the further functionalization steps. To this dispersion of MSPs (30 mL), 2-methoxy(polyethyleneoxy)propyltrimethoxysilane (183.0 μL; 0.38 mmol) was added and the reaction mixture was stirred for 4 h at 60 °C. Subsequently, the particles were recovered by centrifugation (20 min, 3500 rpm) and washed with EtOH (20 mL, 2×) by cycles of sonication and subsequent centrifugation.
The surfactant was removed from the pores of the MSPs by extraction in EtOH. For this purpose, the particles were dispersed in EtOH (50 mL), diluted HCl (50 μL; 250 μL HCl diluted in 2 mL EtOH) was added, and the dispersion was refluxed for 12 h. The particles were collected by centrifugation and washed by sonication and centrifugation in EtOH (50 mL, 2×). The collected precipitate was refluxed again in EtOH as described above, but without HCl and for a time of 4 h. Subsequently, the particles were washed with EtOH (50 mL, 2×).
To functionalize the surfactant-free pore walls of the particles, the surfactant-extracted MSP were dispersed in EtOH (10 mL) and (3-aminopropyl)trimethoxysilane (66.0 μL, 0.38 mmol) and triethylamine (66 μL, 0.47 mmol) were added. The resulting reaction mixture was stirred at room temperature for 12 h. The particles were then collected by centrifugation and washed by sonication and centrifugation with EtOH (20 mL, 2×) and finally air dried.
POM loading of MSPs + (MSPs-POM)
To ensure that no POM crystals remain on the surface of the particles or next to them, we developed an optimized loading and washing process. To a dispersion of MSPs + (10 mg) in water (1 mL), POM (40 mg) was added, and the resulting mixture was briefly sonicated and stirred at room temperature for 12 h. The particles were then collected by centrifugation and subsequently washed with EtOH (1×) and finally with H 2 O (5×).
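For readers who want to scale the optimized loading batch, the recipe can be treated as a fixed 40 mg POM : 10 mg MSPs + : 1 mL water ratio. The assumption that this ratio scales linearly is ours, not stated in the paper; the helper name is likewise hypothetical.

```python
def loading_recipe(msp_mass_mg):
    """POM mass (mg) and water volume (mL) for a given MSPs+ batch,
    assuming the 40 mg : 10 mg : 1 mL ratio from the text scales linearly."""
    return {"pom_mg": msp_mass_mg * 40 / 10, "water_ml": msp_mass_mg * 1 / 10}

print(loading_recipe(25))  # {'pom_mg': 100.0, 'water_ml': 2.5}
```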
MSPs-POM stability tests
In a glass vial (2 mL) with a magnetic stirrer, MSPs-POM were dispersed in 10X PBS buffer (pH = 7.5) by brief sonication (c particles = 0.01 mg·mL −1 ). The resulting particle dispersion was sealed and stirred at 37 °C for 5 days. Over time, aliquots were taken for DLS and zeta-potential measurements.
Dynamic light scattering (DLS)
The measurements were performed using a Malvern ZetaSizer Nano instrument. The intensity of the scattered light was measured at a fixed angle (173°). The wavelength of the laser light used for the light scattering experiments was 633 nm. Data analysis was performed according to standard procedures using the Malvern software (v3.30, https://www.malvernpanalytical.com/ ). Briefly, the decay rates Γ were determined by the standard relation Γ = q 2 k b T/(3πηD h ), with the scattering vector q = (4πn/λ)sin(θ/2), where η is the viscosity of the medium, D h is the hydrodynamic diameter, θ is the scattering angle, n is the refractive index of the medium, and λ is the laser wavelength.
The method of cumulants was used to fit the autocorrelation function, which in turn allows the determination of the diffusion coefficient (d), from which the hydrodynamic diameter (D h ) of the aggregates is calculated using the Stokes–Einstein equation D h = k b T/(3πηd), where k b is the Boltzmann constant, T is the temperature, and η is the viscosity of the medium.
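The Stokes–Einstein conversion can be illustrated numerically. The sketch below is not part of the instrument workflow; it reuses the water viscosity quoted in the zeta-potential section, assumes T = 298.15 K, and the example diffusion coefficient is hypothetical.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter_nm(d_m2_s, temp_k=298.15, eta_pa_s=0.8984e-3):
    """Stokes-Einstein: D_h = k_B * T / (3 * pi * eta * d), returned in nm."""
    return K_B * temp_k / (3 * math.pi * eta_pa_s * d_m2_s) * 1e9

# A diffusion coefficient of ~1.72e-12 m^2/s in water corresponds to ~283 nm,
# i.e. the size range of the MSPs discussed later.
print(f"D_h = {hydrodynamic_diameter_nm(1.72e-12):.0f} nm")
```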
Z-potential measurements
The measurements were performed using the Malvern ZetaSizer Nano instrument. A viscosity of μ = 0.8984 cP, a dielectric constant of ε = 79, and a refractive index of 1.33304 were used. Data analysis was performed with Zetasizer Software 6.12 (Malvern Instruments GmbH, Germany) based on the Smoluchowski model.
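In the Smoluchowski model, the zeta potential follows from the measured electrophoretic mobility as ζ = ημ e /(ε r ε 0 ). A minimal sketch using the viscosity and dielectric constant quoted above; the example mobility is hypothetical and the function name is ours.

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def zeta_mV(mobility_m2_per_Vs, eta_pa_s=0.8984e-3, eps_r=79.0):
    """Smoluchowski approximation: zeta = eta * mu_e / (eps_r * eps0), in mV."""
    return eta_pa_s * mobility_m2_per_Vs / (eps_r * EPS0) * 1e3

# An electrophoretic mobility of -2.5e-8 m^2 V^-1 s^-1 maps to about -32 mV,
# i.e. the range reported for non-functionalized, surfactant-extracted MSPs.
print(f"zeta = {zeta_mV(-2.5e-8):.1f} mV")
```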
Surface amino functionalized and fluorescent MSPs-POM (H 2 N-MSPs-POM)
First, the Sulfo-Cy3 alkoxysilane used for fluorescent labeling of the particles was prepared. For this purpose, APTMS (8 μmol, from a 100 μM stock solution in dry DMSO) was added to a solution of Sulfo-Cy3-NHS ester (0.6 mg, 0.8 μmol) in dry DMSO (50 μL) in a glass vial, mixed briefly, and allowed to react for 30 min at room temperature. A dispersion of MSPs-POM (10 mg) in toluene (500 μL) was also prepared. To this particle dispersion, the crude Sulfo-Cy3-silane mixture and APTMS (10 μL) were added, and the reaction mixture was stirred for 12 h at 60 °C. The particles were then collected by centrifugation and washed with EtOH until no Sulfo-Cy3 was detected in the supernatant of the wash solutions. The particles were then dried under reduced pressure.
Fourier-transform infrared (FT-IR) spectroscopy
FT-IR spectra of the obtained compounds were recorded using the ATR module of a Nicolet iS50 spectrometer, and the resulting data were visualized using Origin 2018b.
Preparation of substrates
Glass coverslips (diameter 15 mm, VWR, Germany) were cleaned by sonication in chloroform, ethanol, and water for 5 min each and then dried with nitrogen. Immediately afterwards, the substrates were plasma treated under oxygen for 5 min (10 sccm O 2 , 0.2 mbar, 100 W; ATTO plasma system, Diener electronics, Germany). Subsequently, the freshly hydroxylated substrates were immersed overnight at room temperature in a freshly prepared MPS-silane solution in toluene (1%, v/v); the substrates were then washed with toluene, ethanol, and water, and dried under a nitrogen stream. Finally, the MPS-modified glass samples were stored in a desiccator.
Ink solutions preparation
Ink solutions for click reaction were prepared by mixing thiol-containing or amine-containing compounds in a mixture of DMSO/TEA (10:3, v/v). To avoid fast evaporation of the ink solvent, an amount of 30% (v/v) of glycerol was added to the ink solutions. The final concentration of the ink solutions was 2 mg/mL.
Microarray patterning
For μCS, the patterns were created on an NLP 2000 instrument (Nanoink, USA), which was equipped with a microchannel cantilever (SPT-S-C30S, Bioforce, Nanosciences, USA). Before loading inks (0.2 μL), the tips were plasma cleaned with oxygen (0.2 mbar, 100 W, 20 sccm O 2 , 2 min) to promote ink transfer, and then the inks were pushed into the microchannel by blowing with a nitrogen stream. All patterning processes (10 × 10 spot array with a pitch of 50 μm) were done at a humidity of 40% and a dwell time of 0.5 s. After printing, the samples were heated at 37 °C for 30 min before being allowed to rest overnight at room temperature (RT) to complete the click reaction, and then washed with water to remove the excess ink. The biotin-bearing microarrays for the comparison of binding efficiency of the thiol-ene and amine-ene coupling routes were incubated as follows. First, the samples were blocked against unspecific protein binding by incubation with 10% BSA in PBS for 30 min. They were then washed by pipetting on and off 30 μL of PBS three times and subsequently incubated with 100 μL of 1 mg mL –1 SA-Cy3 in PBS (1:100) at 37 °C for 30 min in a dark environment. Finally, samples were rinsed 3 times with PBS and blown dry under a stream of nitrogen before evaluation in fluorescence microscopy. For capillary spotting, a custom-made setup was used as previously reported for liquid metal deposition 66 , but without connection of a microfluidic pump. In the present study, the capillary tips were simply immersed in a reservoir of the MSPs-POM solution, enabling the loading of ink via capillary forces. Microarrays with MSPs-POM (5 × 5 spot array with 300 μm pitch) were obtained with a glass capillary tip of approx. 100 μm aperture in an NLP 2000 system (Nanoink, USA) under 40% relative humidity with a dwell time for each spot of 1 s. The samples were then incubated at 37 °C for 3 h and left at RT overnight to complete the click reaction. Finally, excess ink was removed by washing with water.
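The two array geometries above (10 × 10 spots at 50 μm pitch for μCS; 5 × 5 at 300 μm for capillary spotting) can be sketched as stage coordinates. This is an illustrative helper, not the NLP 2000 control software; the function name and origin convention are ours.

```python
def spot_grid(n_rows, n_cols, pitch_um, origin=(0.0, 0.0)):
    """Stage coordinates (um) for a rectangular spot array with a fixed pitch,
    enumerated row by row from the given origin."""
    x0, y0 = origin
    return [(x0 + c * pitch_um, y0 + r * pitch_um)
            for r in range(n_rows) for c in range(n_cols)]

mcs_array = spot_grid(10, 10, 50)       # uCS pattern: 10 x 10 spots, 50 um pitch
capillary_array = spot_grid(5, 5, 300)  # capillary spotting: 5 x 5, 300 um pitch
print(len(mcs_array), mcs_array[-1])    # 100 spots; last spot at x = y = 450 um
```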
Physico-chemical characterization of substrates
The static WCA was measured on an OCA-20 contact angle analyzer (DataPhysics Instruments GmbH, Germany) at room temperature. Briefly, a 3 μL water drop was dispensed on a sample surface, and the measurements were repeated three times for each sample to obtain the means and standard deviations. The roughness and topography of the sample surfaces were evaluated on an AFM (Dimension Icon, Bruker, Germany) at room temperature in air in tapping mode (40 N m -1 , 325 kHz, HQ:NSC15/Al BS, MicroMasch, Germany). Three random positions were scanned for each sample (5 × 5 μm), and the roughness Ra was extracted by the on-board software of the instrument. In addition, the chemical composition of the surface after each step was analyzed by X-ray photoelectron spectroscopy (XPS) using a Thermo Scientific K-Alpha system (Thermo Fisher Scientific, East Grinstead, UK) with a base pressure of about 2 × 10 –9 mbar. Excitation was done using monochromatic Al-Kα X-rays. The energy calibration of the system was done according to ISO 15472:2001 using copper, silver, and gold reference samples. The transmission function was determined using the built-in Thermo standard method on a silver reference sample. Quantification of the measurement results was done using modified Scofield sensitivity factors. A 400 μm X-ray spot was used for the analysis. On non-conducting samples, a flood gun was used for charge compensation. XPS data were processed using the CasaXPS software 67 (suite version 2.3.25). The energy scale of the spectra was set to 285 eV based on the C–C/C–H part of the C 1s signal. A Shirley background was used for the evaluation of the high-resolution spectra.
Optical imaging
The optical images were captured on a Nikon Eclipse 80i upright fluorescence microscope (Nikon, Germany) equipped with an Intensilight illumination (Nikon, Germany), a Nikon DS Qi2 camera, and FITC and Tamra filters (Nikon Y-2E/C). The microscope collected the fluorescence intensity data with the built-in NIS-element software (Nikon, Germany).
Electron microscopy
The SEM images were acquired with a Zeiss Ultra-Plus SEM at 3 kV and 5 kV. The EDX measurements were performed with a Zeiss Leo 1530 SEM operating at 20 kV. The EDX data were acquired with Oxford Instruments AZtec software using an Oxford X-Max N 50 detector.
Statistical analysis
All data shown in this work are reported as means ± standard deviations. The fluorescence intensity values were obtained with the on-board software (NIS Elements AR 5.02.01, Nikon) of the microscope. The original WCA and roughness data were obtained by measuring 3 random points on each sample, and the means and standard deviations were computed in Excel using the STDEVA formula.

Results and discussion
Immobilization strategy
To imprint POMs on surfaces (Fig. 2 ), we aimed to use MSPs whose pore walls are functionalized with amino groups (MSPs + ), which are protonated and thus positively charged in an aqueous environment. Due to the size of the mesopores (≈ 3 nm) and the polyanionic nature of the POM, these cargo molecules will be retained in the positively charged pores 48 . Immobilization of the POM-loaded MSPs (MSPs-POM) will then be enabled by functionalization of the particle surface with reactive amino groups, which can engage in amine-ene Michael additions to alkene-functionalized surfaces. The advantage of MSPs, therefore, is the possibility to circumvent the use of monofunctionalized POMs 49 , 50 , whose synthesis requires considerable effort, while providing a generalizable strategy for a wide range of POMs to be immobilized on solid substrates.
Synthesis of POM-loaded MSPs
To prepare POM-loaded MSPs, first MSPs functionalized with amino groups on the pore wall surfaces (MSPs + ) were prepared. To this end, pristine spherical nanometer-sized MSPs were synthesized following a previously reported modified Stöber synthesis protocol using the surfactant cetyltrimethylammonium bromide (CTAB) as the structure-directing agent 51 . This synthesis method was chosen because the resulting particles are in the nanometer range, have a spherical shape, have a high specific surface area (1243 m 2 g -1 ), and are mesoporous (pore diameter ≈ 3 nm)—all properties required for subsequent effective loading of POM. Scanning electron microscopy (SEM) analysis revealed that the pristine MSPs were spherical with an average diameter of (283 ± 25) nm (Fig. S1 ), which is in good agreement with previously published results.
The pristine MSPs were then passivated on the outer surfaces with a short polyethylene glycol chain (6–9 units) alkoxysilane 52 and then CTAB was extracted from the mesopores of the particles. Next, the CTAB-free pore walls were covalently grafted with amino groups using (3-aminopropyl)trimethoxysilane (APTMS), resulting in MSPs whose pore walls are functionalized with amino groups (MSPs + ). The surface passivation and amino functionalization (Fig. S2 A) were investigated by zeta potential (Z-pot) analysis (Fig. S2 B), which revealed a nearly neutral surface charge (0.4 ± 1.2) mV of MSPs + . This reduced negative Z-pot value (for non-functionalized and surfactant-extracted MSP, the Z-pot is around − 30 to − 40 mV) can be explained by the surface passivation that took place, which reduces the amount of deprotonated and negatively charged hydroxyl groups on the surface of the particles, as well as by charge balancing due to the positive R-NH 3 + groups present on the pore entrances.
Next, we optimized the procedure for loading MSPs + with POM (MSPs-POM) to eliminate the presence of non-loaded POM crystals. In suboptimal loading procedures, where not all POM was loaded within the pores of MSPs + , POM crystals were observed near or on the surface of MSPs, as demonstrated by SEM analysis (Fig. S3 A). By optimizing the mass ratio between POM and MSPs + used for the loading and adding several washing steps with EtOH and water, we were able to obtain MSPs-POM (Figs. 3 A and S3 B), which showed no crystalline POM precipitates. Z-pot measurements on MSPs-POM indicated more negative values compared to MSPs + particles, suggesting successful adsorption of POMs within the pores of the particles (Fig. S2 B). Attenuated total reflectance Fourier-transform infrared spectroscopy (ATR-FTIR) recorded on MSPs-POM (Fig. 3 B) shows the characteristic POM-related transmission bands supporting the presence of POM in the mesopores of the silica nanoparticles (gray background). The FTIR spectrum of MSPs-POM exhibits vibrational bands within the 1200–400 cm -1 region, which closely resemble the characteristic bands found in the pristine POM. POMs feature unique metal–oxygen vibrational modes within the fingerprint region. The distinctive peaks at 1134 and 1054 cm -1 are attributed to the P–O vibrations of {GdP 5 W 30 } POM. The peak at 907 cm -1 could be assigned to the terminal ν as (W=O t ) vibration. The features around 709 cm -1 could be attributed to the edge-sharing ν as (W–O–W). All these bands are considered as pure vibrations of the POM skeleton. The peak at 1629 cm -1 can be attributed to the δ(O–H) of the lattice water molecules. The broad transmission band at 3441 cm -1 can be attributed to the presence of -OH groups on the outer surface of MSPs-POM, while the broad bands in the range between 3308 and 3010 cm -1 can be attributed to the presence of R-NH 2 /R-NH 3 + groups 53 , 54 .
The three relatively sharp transmission bands at 2917, 2840, and 1460 cm -1 are representative of the C sp3 -H stretching vibration of the organic functional groups of the MSPs-POM particles, i.e. , the PEG caps and the condensed APTMS groups 55 . Energy dispersive X-ray (EDX) analysis (Figs. 3 C and S4 ) also confirmed that the POM is colocalized with the particles.
To evaluate the stability of MSPs-POM under physiological conditions, their hydrodynamic diameter ( D h ) was monitored over time in 10X phosphate-buffered saline (PBS, pH 7.4) at 37 °C. Figure 4 shows the DLS size distribution of the particles at time points 0, 1 day, and 5 days. The size distribution remains essentially unchanged over 5 days, indicating that both the spherical shape and size of the particles remain intact under physiological conditions. The stability of the particles is achieved by covalent surface passivation of the terminal silanol groups of the silica with PEG silane, which effectively prevents hydrolysis of the silica 56 , 57 . We also measured the Z-pot to determine whether the POMs could leak out of the mesopores over time, which would be indicated by a clear shift to more positive Z-pot values of the particles. However, over 5 days, the zeta potential of the particles dispersed in PBS showed no significant shift towards more positive values (Fig. S5 ), which indicates that the POMs stay adsorbed on the silica surface.
Preparation and characterization of substrate
The alkene-bearing reactive surfaces were prepared by silanization of 3-methacryloxypropyltrimethoxysilane (MPS-silane) on hydroxylated glass substrates, which can participate in thiol-ene and amine-ene click reactions (Fig. 5 A) 58 , 59 . To achieve efficient silanization, the cleaned glass coverslips were treated with oxygen plasma to remove remaining contaminants and to endow a high density of hydroxyl groups on the glass surfaces. Then, the activated glass slips were immersed in an MPS-silane solution in toluene to obtain the MPS-modified substrates. The characterization of the MPS-modified substrates was carried out using atomic force microscopy (AFM), water contact angle (WCA) measurements, and X-ray photoelectron spectroscopy (XPS). The results are shown in Fig. 5 B–D.
The glass coverslips usually show a WCA of around 43° (Fig. S6 A), while just after plasma treatment, the value declines to around 0° (Fig. S6 B) due to the densely grafted hydroxyl groups. As shown in Fig. 5 B, after silanization of the plasma-treated glass surface with MPS-silane, the WCA increases to around 64.3°, which is also an indication of a successful silanization process. The mean surface roughness (Ra) of the silanized surfaces obtained by AFM is around 0.251 nm, which is consistent with former reports concerning self-assembled monolayers (SAMs) 60 , 61 . AFM images of a glass coverslip before and after silanization are given in Figs. S6 C and 5 C, respectively. To further confirm successful silanization, high-resolution XPS (HR-XPS) was performed on MPS-modified and bare glasses for comparison; the corresponding spectra in the C 1s region are illustrated in Fig. 5 D. The deconvoluted HR-XP spectra exhibit three peaks at 284.8 eV, 286.5 eV, and 288.9 eV, which are derived from (C–C, C–H), (C–O), and (O–C=O), respectively. Comparing the XP spectra of the bare glass and MPS-modified samples, the ratio of (C–O):(C–C, C–H) increases from 0.21 to 0.35, which indicates the formation of the MPS-modified SAM.
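The (C–O):(C–C, C–H) ratio used above is simply the quotient of the deconvoluted C 1s component areas. A minimal sketch with hypothetical peak areas chosen to reproduce the reported ratios (the real areas are not given in the text):

```python
def co_to_cc_ratio(areas):
    """Ratio of the C-O component area to the (C-C, C-H) component area."""
    return areas["C-O"] / areas["C-C/C-H"]

# Hypothetical deconvoluted C 1s peak areas (arbitrary units):
bare_glass = {"C-C/C-H": 100.0, "C-O": 21.0, "O-C=O": 6.0}
mps_modified = {"C-C/C-H": 100.0, "C-O": 35.0, "O-C=O": 12.0}

print(round(co_to_cc_ratio(bare_glass), 2),
      round(co_to_cc_ratio(mps_modified), 2))  # 0.21 0.35
```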
To validate the reactivity of the MPS-modified glass substrates and probe the possibility of creating microarrays on these via microchannel cantilever spotting (μCS), small fluorescent molecules with either a thiol or amine moiety were chosen as test probes, namely NH 2 -Tetramethylrhodamine (NH 2 -Tamra) and SH-Fluorescein (SH-FITC) (Fig. 5 A). μCS works by loading a microchannel cantilever with a μL amount of ink containing the desired molecules, and then bringing the cantilever repeatedly into contact with the substrate in a scanning probe lithography (SPL) setup allowing for high positioning control, as well as controlled humidity and dwell time. Each spot in the resulting droplet microarrays can be regarded as a microreactor permitting click reactions between the alkene terminal groups and thiol or amine moieties to take place. After the desired incubation time, excess ink can simply be washed away from the surface. As shown in Fig. 6 A, the NH 2 -Tamra firmly binds to the MPS-modified surface, as the fluorescent pattern is strongly visible in the red channel. Similarly, the SH-FITC also works well with MPS-modified surfaces, as shown in Fig. 6 B. Therefore, we confirm that both the amine and thiol moieties bind well to the alkene-functionalized surface.
As a final test of reactivity and to allow a direct comparison of binding efficiency between the thiol-ene and amine-ene routes, two biotinylated compounds, NH 2 -biotin and SH-biotin, were printed. When incubated with Cy3-conjugated streptavidin (SA-Cy3), the printed microarrays can bind the SA-Cy3 via the strong affinity between biotin and avidin 62 , and the binding density of the two microarrays can be directly compared via fluorescence intensity 60 , 63 , 64 . After incubation with bovine serum albumin (BSA) for blocking unspecific binding, followed by incubation with SA-Cy3 (both in phosphate-buffered saline (PBS)) on the biotinylated arrays, the biotin spots are easily read out on a fluorescence microscope, as shown in Fig. 6 C and D. The corresponding fluorescence intensity comparison is shown in Fig. 6 E. The yield of the thiol-ene click route (9620.86 ± 1148.15 a.u.) is about 21% higher than that of the amine-ene route (7585.39 ± 1046.44 a.u.); however, given the overlapping fluctuations, both routes deliver comparable and stable binding to the surface.
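Note that the "about 21%" figure corresponds to normalizing the intensity difference to the thiol-ene mean; normalized to the amine-ene mean, the same difference is about 27%. A quick check using the reported means (an illustration; the helper name is ours):

```python
def percent_diff(a, b, baseline):
    """Difference between a and b expressed relative to the chosen baseline."""
    return (a - b) / baseline * 100

thiol_ene, amine_ene = 9620.86, 7585.39  # reported mean intensities (a.u.)
print(f"{percent_diff(thiol_ene, amine_ene, thiol_ene):.0f}% of the thiol-ene mean")
print(f"{percent_diff(thiol_ene, amine_ene, amine_ene):.0f}% above the amine-ene mean")
```

Either way, the difference is well within one standard deviation of each mean, consistent with the conclusion that both routes bind comparably.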
Patterned immobilization of MSPs-POM
After having obtained amine-functionalized MSPs-POM (H 2 N-MSPs-POM, see Materials and methods) and the matching alkene-functionalized glass surface, capillary spotting was adopted to generate microarrays with MSPs-POM ink. The glass capillary tip has a larger aperture compared to μCS tips and can be tailored to different opening sizes, making it better suited for nanoparticle spotting. After spotting, the obtained microarrays were incubated at 37 °C to complete the click reaction and then washed with water to remove the excess ink. The results of the capillary spotting-based microarray are shown in Fig. 7 .
Directly after spotting, the microarray droplets are visible in bright-field and (due to labeling of the MSPs-POM with Cy3) fluorescence microscopy images (Fig. 7 A, B). After the click reaction had taken place and washing, the microarray is no longer visible in the bright-field images, but the fluorescence images revealed the immobilization of MSPs-POM (Fig. 7 C). Close-ups of individual immobilized MSPs-POM particles were obtained with SEM (Fig. 7 G) and AFM (Fig. 7 H), further confirming the successful binding to the surface and the integrity of the modified MSPs-POM.
Immobilization strategy
To imprint POMs on surfaces (Fig. 2 ), we aimed to use MSPs whose pore wall is functionalized with amino groups (MSPs + ), that are protonated and thus positively charged in an aqueous environment. Due to the size of the mesopores (≈ 3 nm) and the polyanionic nature of the POM, these cargo molecules will be retained in the positively charged pores 48 . Immobilization of the POM-loaded MSP (MSPs-POM) will then be enabled by functionalization of the particle surface with reactive amino groups, which can engage in amine-ene Michael additions to alkene-functionalized surfaces. The advantage of MSPs, therefore, is the possibility to circumvent the use of monofunctionalized POMs 49 , 50 , whose synthesis requires considerable effort, while providing a generalizable strategy for a wide range of POMs to be immobilized on solid substrates.
Synthesis of POM-loaded MSPs
To prepare POM-loaded MSPs, first MSPs functionalized with amino groups on the pore wall surfaces (MSPs + ) were prepared. To this end, pristine spherical nanometer-sized MSPs were synthesized following a previously reported modified Stöber synthesis protocol using the surfactant cetyltrimethylammonium bromide (CTAB) as the structure-directing agent 51 . This synthesis method was chosen because the resulting particles are in the nanometer range, have a spherical shape, have a high specific surface area (1243 m 2 g -1 ), and are mesoporous (pore diameter ≈ 3 nm)—all properties required for subsequent effective loading of POM. Scanning electron microscopy (SEM) analysis revealed that the pristine MSPs were spherical with an average diameter of (283 ± 25) nm (Fig. S1 ), which is in good agreement with previously published results.
The pristine MSPs were then passivated on the outer surfaces with a short polyethylene glycol chain (6–9 units) alkoxysilane 52 and then CTAB was extracted from the mesopores of the particles. Next, the CTAB-free pore walls were covalently grafted with amino groups using (3-aminopropyl)trimethoxysilane (APTMS), resulting in MSPs whose pore walls are functionalized with amino groups (MSPs + ). The surface passivation and amino functionalization (Fig. S2 A) were investigated by zeta potential (Z-pot) analysis (Fig. S2 B), which revealed a nearly neutral surface charge (0.4 ± 1.2) mV of MSPs + . This reduced negative Z-pot value (for non-functionalized and surfactant-extracted MSP, the Z-pot is around − 30 to − 40 mV) can be explained by the surface passivation that took place, which reduces the amount of deprotonated and negatively charged hydroxyl groups on the surface of the particles, as well as by charge balancing due to the positive R-NH 3 + groups present on the pore entrances.
Next, we optimized the procedure for loading MSPs + with POM (MSPs-POM) to eliminate the presence of non-loaded POM crystals. In suboptimal loading procedures, where not all POM was loaded within the pores of MSPs + , POM crystals were observed near or on the surface of MSPs, as demonstrated by SEM analysis (Fig. S3 A). By optimizing the mass ratio between POM and MSPs + used for the loading and adding several washing steps with EtOH and water, we were able to obtain MSPs-POM (Figs. 3 A and S3 B), which showed no crystalline POM precipitates (MSPs-POM). Z-pot measurements on MSPs-POM indicated more negative values compared to MSPs + particles, suggesting successful adsorption of POMs within the pores of the particles (Fig. S2 B). Attenuated total reflectance Fourier infrared spectroscopy (ATR-FTIR) recorded on MSPs-POM (Fig. 3 B) shows the characteristic POM-related transmission bands supporting the presence of POM in the mesopores of the silica nanoparticles (gray background). The FTIR spectrum of MSPs-POM exhibits vibrational bands within the 1200–400 cm -1 region, which closely resemble the characteristic bands found in the pristine POM. POMs feature unique metal–oxygen vibrational modes within the fingerprint region. The distinctive peaks at 1134 and 1054 cm -1 are attributed to the P–O vibrations of {GdP 5 W 30 } POM. The peak at 907 cm -1 could be assigned to terminal ν as (W══O t ) vibration. The features around 709 cm -1 could be attributed to the edge-sharing ν as (W–O–W). All these bands are considered as pure vibrations of the POM skeleton. The peak at 1629 cm -1 can be attributed to the δ(O–H) of the lattice water molecules. The broad transmission band at 3441 cm -1 can be attributed to the presence of -OH groups on the outer surface of MSPs-POM, while the broad bands in the range between 3308 and 3010 cm -1 can be attributed to the presence of R-NH 2 /R-NH 3 + groups 53 , 54 . 
The three relatively sharp transmission bands at 2917, 2840, and 1460 cm -1 are representative of the C sp3 -H stretching vibration of the organic functional groups of the MSPs-POM particles, i.e. , the PEG caps and the condensed APTMS groups 55 . Energy dispersive X-ray (EDX) analysis (Figs. 3 C and S4 ) also confirmed that the POM is colocalized with the particles.
To evaluate the stability of MSPs-POM under physiological conditions, their hydrodynamic diameter ( D h ) was monitored over time in 10X phosphate-buffered saline (PBS, pH 7.4 at 37 °C). Figure 4 shows the DLS size distribution of the particles at time points 0, 1 day, and 5 days. The size distribution remains little affected over 5 days, indicating that both the spherical shape and size of the particles remain intact under physiological conditions. The stability of the particles is achieved by covalent surface passivation of the terminal silanol groups of the silica with PEG silane, which effectively prevents hydrolysis of the silica 56 , 57 . We also measured the Z-pot to determine whether the POMs could leak out of the mesopores over time, which would be indicated by a clear shift to more positive Z-pot values of the particles. However, over 5 days, the zeta potential of the particles dispersed in PBS showed no significant shift towards more positive values (Fig. S5), indicating that the POMs stay adsorbed on the silica surface.
Preparation and characterization of substrate
The alkene-bearing reactive surfaces were prepared by silanization of 3-methacryloxypropyltrimethoxysilane (MPS-silane) on hydroxylated glass substrates, which can participate in thiol-ene and amine-ene click reactions (Fig. 5 A) 58 , 59 . To achieve efficient silanization, the cleaned glass coverslips were treated with oxygen plasma to remove remaining contaminants and to endow a high density of hydroxyl groups on the glass surfaces. Then, the activated glass coverslips were immersed in an MPS-silane solution in toluene to obtain the MPS-modified substrates. The characterization of the MPS-modified substrates was carried out using atomic force microscopy (AFM), water contact angle (WCA) measurements, and X-ray photoelectron spectroscopy (XPS). The results are shown in Fig. 5 B–D.
The glass coverslips usually show a WCA of around 43° (Fig. S6 A), while just after plasma treatment, the value declines to around 0° (Fig. S6 B) due to the densely grafted hydroxyl groups. As shown in Fig. 5 B, after silanization of the plasma-treated glass surface with MPS-silane, the WCA increases to around 64.3°, which is also an indication of a successful silanization process. The mean surface roughness (Ra) of the silanized surfaces obtained by AFM is around 0.251 nm, which is consistent with former reports concerning self-assembled monolayers (SAMs) 60 , 61 . AFM images of a glass coverslip before and after silanization are given in Figs. S6 C and 5 C, respectively. To further confirm successful silanization, high-resolution XPS (HR-XPS) was performed on MPS-modified and bare glasses for comparison; the corresponding spectra in the C 1s region are illustrated in Fig. 5 D. The deconvoluted HR-XP spectra exhibit three peaks at 284.8 eV, 286.5 eV, and 288.9 eV, which are derived from (C–C, C–H), (C–O), and (O–C=O), respectively. Comparing the XP spectra of the bare glass and MPS-modified samples, the ratio of (C–O):(C–C, C–H) increases from 0.21 to 0.35, which indicates the formation of the MPS-modified SAM.
To validate the reactivity of the MPS-modified glass substrates and to probe the possibility of creating microarrays on these via microchannel cantilever spotting (μCS), small fluorescent molecules with either a thiol or an amine moiety were chosen as test probes, namely NH 2 -Tetramethylrhodamine (NH 2 -Tamra) and SH-Fluorescein (SH-FITC) (Fig. 5 A). μCS works by loading a microchannel cantilever with a μL amount of ink containing the desired molecules and then bringing the cantilever repeatedly into contact with the substrate in a scanning probe lithography (SPL) setup, allowing for high positioning control as well as controlled humidity and dwell time. Each spot in the resulting droplet microarrays can be regarded as a microreactor permitting click reactions between the alkene terminal groups and the thiol or amine moieties to take place. After the desired incubation time, excess ink can simply be washed away from the surface. As shown in Fig. 6 A, the NH 2 -Tamra binds firmly to the MPS-modified surface, as the fluorescent pattern is strongly visible in the red channel. Similarly, the SH-FITC also works well with MPS-modified surfaces, as shown in Fig. 6 B. Therefore, we confirm that both the amine and thiol moieties bind well to the alkene-functionalized surface.
As a final test of reactivity and to allow a direct comparison of binding efficiency between the thiol-ene and amine-ene routes, two biotinylated compounds, NH 2 -biotin and SH-biotin, were printed. When incubated with Cy3-conjugated streptavidin (SA-Cy3), the printed microarrays can bind the SA-Cy3 via the strong affinity between biotin and avidin 62 , and the binding density of the two microarrays can be directly compared via fluorescence intensity 60 , 63 , 64 . After incubation with bovine serum albumin (BSA) to block unspecific binding, followed by incubation with SA-Cy3 (both in phosphate buffered saline (PBS)) on the biotinylated arrays, the biotin spots are easily read out on a fluorescence microscope, as shown in Fig. 6 C and D. The corresponding fluorescence intensity comparison is shown in Fig. 6 E. Judging by fluorescence, the yield of the thiol-ene click route ((9620.86 ± 1148.15) a.u.) is about 21% higher than that of the amine-ene route ((7585.39 ± 1046.44) a.u.); however, as the two values are similar within their fluctuations, both routes deliver comparable and stable binding to the surface.
Patterned immobilization of MSPs-POM
After having obtained amine-functionalized MSPs-POM (H 2 N-MSPs-POM, see Materials and methods) and the matching alkene-functionalized glass surface, capillary spotting was adopted to generate microarrays with MSPs-POM ink. The glass capillary tip has a larger aperture compared to μCS tips and can be tailored to different opening sizes, making it better suited for nanoparticle spotting. After spotting, the obtained microarrays were incubated at 37 °C to complete the click reaction and then washed with water to remove the excess ink. The results of the capillary spotting-based microarrays are shown in Fig. 7 .
Directly after spotting, the microarray droplets are visible in bright-field and (due to labeling of the MSPs-POM with Cy3) fluorescence microscopy images (Fig. 7 A, B). After the click reaction had taken place and the arrays were washed, the microarray is no longer visible in bright-field images, but the fluorescence images reveal the immobilization of MSPs-POM (Fig. 7 C). Close-ups of individual immobilized MSPs-POM particles were obtained with SEM (Fig. 7 G) and AFM (Fig. 7 H), further confirming the successful binding to the surface and the integrity of the modified MSPs-POM.

Conclusions
Our proof-of-concept study shows that the loading of MSPs with POM in conjunction with SPL techniques is a viable route to create surface-immobilized microarrays of MSPs-POM. Encapsulation of POM in MSPs increases stability and enables immobilization of a larger amount of POM, which can outperform direct immobilization methods that typically use molecularly dissolved POM to form surface monolayers. The MSPs-POM are stable under physiological conditions for several days and can therefore also be used for applications in cell cultures. The microarray spotting approach for POMs presented here paves the way for future surface-based applications of POM composites. In particular, the Gd(III)-substituted POM presented here has potential implications for surface microarray-based sensing via surface NMR, acting as a strong molecular NMR chemosensor. The approach is versatile and can, for example, be easily transferred to the loading of other POMs for specific applications. Keeping in mind the capabilities of SPL techniques, in particular the possibility of multiplexing (i.e., deposition of different materials within the same micropatterns) and highly localized deposition (e.g., to deliver functional materials exclusively to prestructures and microdevices like electrodes or sensor devices) 65 , complex functional constructs can be fabricated. Taken together, we think that these MSPs-POM offer an attractive route for the immobilization of functional POMs into surface-bound patterns for future applications in biomedical sensing and catalysis.

Abstract

Polyoxometalates (POM) are anionic oxoclusters of early transition metals that are of great interest for a variety of applications, including the development of sensors and catalysts. A crucial step in the use of POM in functional materials is the production of composites that can be further processed into complex materials, e.g. by printing on different substrates.
In this work, we present an immobilization approach for POMs that involves two key processes: first, the stable encapsulation of POMs in the pores of mesoporous silica nanoparticles (MSPs) and, second, the formation of microstructured arrays with these POM-loaded nanoparticles. Specifically, we have developed a strategy that leads to water-stable, POM-loaded mesoporous silica that can be covalently linked to alkene-bearing surfaces by amine-Michael addition and patterned into microarrays by scanning probe lithography (SPL). The immobilization strategy presented facilitates the printing of hybrid POM-loaded nanomaterials onto different surfaces and provides a versatile method for the fabrication of POM-based composites. Importantly, POM-loaded MSPs are useful in applications such as microfluidic systems and sensors that require frequent washing. Overall, this method is a promising way to produce surface-printed POM arrays that can be used for a wide range of applications.
POMs are anionic oxo-clusters of early transition metals in their high oxidation states, which display unique chemical and physical properties on the basis of their structures, composition, sizes, rich redox chemistry and charges 1 , 2 . Prominent POM structures are the Keggin-type structure, [XM 12 O 40 ] n− , and the Wells–Dawson-type structure, [X 2 M 18 O 62 ] n− 3 . Their properties have been widely exploited in a variety of areas, including catalysis, materials science, photochemistry, molecular magnetism and medicine 4 – 6 . The high negative charge and multiple oxo-donor sites make them useful multidentate building blocks for constructing transition and lanthanide metal-based cluster complexes. Moreover, their capacity to engage with organic moieties enables the formation of hybrid assemblies with distinctive functionalities 7 . Such assemblies have exhibited significant promise as artificial photosynthesis systems, e.g. in water oxidation 8 and carbon dioxide reduction 9 . Among the larger investigated POM clusters, the well-known Preyssler-type anion 10 , 11 , of general formula [M n + P 5 W 30 O 110 ] (15− n )− (abbreviated as {MP 5 W 30 }) is a doughnut-shaped molecule with an inner cavity and 30 terminal O atoms. This polyanion can capture different cations with suitable size such as alkali metals and lanthanide ions (Ln) on the inner surface of the POM. The members of the {MP 5 W 30 } POM family are among the most robust, stable, and processable POM complexes, which are also stable over a wide pH range (pH 1–12) 12 . In addition, their unique biological compatibility and high surface charges result in a flexible architecture for interaction with target substrates, making these POMs a promising molecular sensor material 12 , 13 .
In this work, we use Preyssler-type POM K 12 [GdP 5 W 30 O 110 ]·15H 2 O (hereafter shortened as {GdP 5 W 30 }, structure shown in Fig. 1 ), synthesized from a Preyssler anion [NaP 5 W 30 O 110 ] 14 in which the central Na + cation was replaced by the Gd III ion under hydrothermal conditions. Gd(III)-substituted POMs play a key role in the development of advanced nanomaterial-based contrast agents for nuclear magnetic resonance (NMR) imaging 15 , 16 and thus have great potential for molecular quantum sensing applications 17 , 18 . These contrast agents are compounds used to enhance the visibility of various tissues, structures, substances, or pathological conditions. They achieve this by emphasizing disparities or boundaries between different substances or tissue types. In the field of imaging, Gd-containing POMs have been found to exhibit a higher r 1 value compared to commercial contrast agents. This improvement is attributed to the significant molecular weight and the rigid framework structure of POMs, which lead to an extended rotational correlation time and an increased r 1 value 19 , 20 .
However, pure inorganic POMs are usually soluble in many polar solvents, causing difficulties in the recovery, separation, and recycling of the materials 21 . This poses challenges in biomedical and catalysis applications and also for surface-based sensing, where immobilization at high density would be desirable. To address this problem, various methods for immobilizing POMs into heterogeneous matrices have been developed. The resulting hybrid composites have found applications in various fields, including catalysis 22 , energy conversion and storage 23 , 24 , molecular sensors and electronics 25 . The immobilization of POMs into cationic silica nanoparticles through covalent or electrostatic binding can yield a novel material, potentially creating a biocompatible nanomaterial with luminescent properties suitable for biosensing and imaging applications 26 – 28 . In particular, the immobilization of Gd-containing POMs into structured microarrays could enable the use of surface NMR-based methods for molecular quantum sensing, as demonstrated recently in diamond films 29 , 30 , with the direct use of Gd enhancing the NMR signal instead of nitrogen-vacancy (NV) centers acting as a proxy for readout.
In our attempt to find a straightforward approach to immobilize {GdP 5 W 30 } POM on a glass surface, we decided to use nanometer-sized mesoporous silica particles (MSP) 31 – 33 as the POM carriers that can be covalently tethered onto solid substrates. The choice of MSPs to serve as the POM carrier bears several advantages. First, MSPs can be prepared with tunable porosity (2–50 nm) 34 , sizes and shapes 35 , 36 , and possess high surface areas (up to 1000 m 2 ∙g −1 ) 37 . Second, MSPs can be functionalized with a variety of organic functional groups via alkoxysilane chemistry 38 , 39 . The functionalization of MSPs can occur on the outer surface of the particles or on the surface of their inner pore walls 40 , 41 . For example, "clickable" functional groups can be grafted onto silica surfaces by sol–gel chemistry with organoalkoxysilanes bearing amine 42 , 43 , or azide functional groups 44 , 45 , which has been used in the past for covalent immobilization of silica and silica-based (nano)particles on solid substrates 46 , 47 .
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-023-50846-2.
Acknowledgements
This work was partly carried out with the support of the Karlsruhe Nano Micro Facility (KNMFi, www.knmf.kit.edu ), a Helmholtz Research Infrastructure at Karlsruhe Institute of Technology (KIT, www.kit.edu ). This work was supported by KNMFi project Nr. 2020-023-028480. B.Y. and W.W. acknowledge support by the China Scholarship Council fellowship (No. 201807040067, and No. 202106240010, respectively, CSC, www.csc.edu.cn ). S.M. acknowledges support from the Helmholtz Association via the Program NACIP ( http://www.nacip.kit.edu ).
Author contributions
Experiment Design: B.Y., Y.W., M.I., M.H., A.P.; POM synthesis: M.I., K.B.; Synthesis and characterization of MSPs-POM: P.P., K.B.; Printing experiments: B.Y., W.W., C.S.; Optical microscopy: B.Y., W.W., C.S.; Electron microscopy and analysis: S.M.; XPS measurements: D.M., A.S.; Data analysis: B.Y., P.P., W.W., Y.W., D.M., A.S.; Data curation: M.I., M.H., A.P.; Supervision: M.I., M.H., A.P.; Writing initial manuscript: B.Y., P.P., M.I., M.H. All authors were involved in revising the manuscript.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare no competing interests.

Sci Rep. 2024 Jan 13; 14:1249
PMC10787770 | 38222190

Introduction
Acute scrotal pain is a relatively common emergency presentation, both in the primary care setting and in the emergency department, comprising approximately 0.5% of all emergency visits in the United States annually [ 1 ]. Testicular torsion is a true urological emergency in such cases of acute scrotal pain. Torsion is a time-sensitive condition in which twisting of the spermatic cord occurs and testicular blood supply compromise ensues, leading to acute onset severe scrotal pain [ 1 ]. Understanding the anatomy of the testicle is important in comprehending the pathophysiology of torsion. The tunica vaginalis is usually firmly attached to the posterolateral aspect of the testicle, and within it, the spermatic cord is not mobile. In cases where the attachment of the tunica vaginalis is high, the spermatic cord can twist more easily inside, leading to intravaginal torsion [ 1 ].
The incidence of testicular torsion is highest amongst prepubertal males; however, it can occur at any age [ 1 ]. Torsion typically presents with an acutely painful hemi-scrotum, with a tender, elevated testis in a horizontal lie on clinical examination. As arterial blood supply is abruptly ceased, testicular detorsion is a race against time. Every hour that passes from the onset of symptoms has been shown to decrease the salvageability rate of the torted testis. Another significant factor that impacts testicular salvage is the degree of torsion. In most cases, 90-180 degrees of testicular rotation is capable of compromising testicular blood flow. Further degrees of torsion are rarer and significantly decrease the viability of the testis. The best salvage rates are seen within less than eight hours from the onset but become rare if more than 24 hours have elapsed [ 1 , 2 ].
A testicular ultrasound can be invaluable when available in a timely manner; however, it must not delay quick surgical intervention. It is considered the main adjunctive diagnostic modality beyond clinical examination. A color Doppler flow ultrasound for testicular torsion is approximately 93% sensitive and 100% specific, aiding in both diagnosis of testicular torsion as well as an assessment of testicular volume [ 3 , 4 ]. Once the diagnosis is made, the standard of care is immediate surgical intervention for testicular detorsion and bilateral orchidopexy if the testis is viable, or orchidectomy if necrosis has occurred [ 5 ].
The surgical management of testicular torsion depends on whether the testis is salvageable during surgical exploration. A black-colored testis is deemed necrotic, leading to orchidectomy, while a purple to whitish-pink-colored testis is considered viable, and bilateral orchidopexy is performed [ 5 , 6 ]. One major sequela following orchidopexy for torsion is a decrease in testicular volume. As testicular volume decreases, so does its capacity for spermatogenesis and testosterone production. A poorly functioning testis can have long-term effects on patients in terms of fertility, as well as decreased libido, sexual dysfunction, and psychological impacts [ 6 ].
The aim of this study is to assess testicular volume loss post orchidopexy in patients who presented with testicular torsion, as well as to identify the significance of the degree of rotation and duration of torsion in post-fixation volume loss.

Materials and methods
All patients who underwent scrotal exploration for a primary diagnosis of testicular torsion between June 1, 2016, and January 15, 2023, were reviewed. All data were recorded from the hospital’s electronic database. Patients were excluded if they underwent an orchidectomy, had a diagnosis other than testicular torsion once scrotal exploration was done, or did not undergo a follow-up scrotal ultrasound. Additionally, patients who were referred from other centers and had preoperative ultrasounds done outside our institute or who underwent an orchidopexy for undescended testis earlier in life were excluded.
The information obtained from the electronic files included the patients’ demographics such as age, duration of symptoms, and laterality. Images were reviewed for preoperative ultrasound findings, which included confirmation of testicular torsion as well as testicular volume measurements. Routine postoperative scrotal ultrasound is not done in our center unless patients have postoperative concerns that necessitate it. However, patients with at least six months of follow-up were contacted by phone, and testicular volumes were measured by scrotal ultrasound. Testicular volume was calculated using the formula length (mm) × width (mm) × height (mm) × 0.72. All scrotal ultrasounds were done using a GE LOGIQ E9 ultrasound machine (General Electric, Boston, Massachusetts, United States) using a linear 9 MHz transducer probe. All radiographic reporting was done by a senior radiology resident.
The local protocol in our center for a patient who presents with acute scrotal pain is simultaneous immediate shifting to ultrasound assessment and urological consultation. Once testicular torsion is suspected, patients are booked and shifted for surgery. The standard operative procedure practiced in our center is a vertical scrotal incision starting on the affected side, followed by delivery of the torted testis and assessment of viability and color, followed by prompt detorsion while recording the degree of torsion. Once detorted, warm compresses are applied, and contralateral orchidopexy is performed. If the torted testis regains color and is visually viable, a dartos pouch is fashioned and 3-point fixation at 3, 6, and 9 o’clock is done using 3-0 Vicryl (Ethicon Inc., Raritan, New Jersey, United States). The dartos layer is also closed using 3-0 Vicryl, whereas skin closure is done using Rapide Vicryl. All surgeries were done under spinal anesthesia and performed by a senior urology resident.
For statistical analysis purposes, degrees of testicular torsion were classified into mild (90-180 degrees), moderate (180-360 degrees), and severe (>360 degrees). Furthermore, time to surgery was recorded in hours from the onset of symptoms until surgery start time and classified into mild (less than four hours), moderate (four to six hours), and severe (more than six hours). A linear regression model was used to predict the relationship between testicular volume loss and the independent variables of degree of torsion and time to surgery. The equation used for the regression model was “Volume = β0 + β1 * Independent variable”, in which β1 is the regression coefficient for the degree of torsion and β0 is the intercept.
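The analysis itself was run in SPSS; as an illustration only, the single-predictor regression described above, and the Spearman rank correlation used alongside it, can be sketched in pure Python. The severity codes and volume-loss values below are invented example data, not the study's:

```python
def simple_ols(x, y):
    # Fit volume_loss = b0 + b1 * x by ordinary least squares (one predictor).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    return my - b1 * mx, b1  # (intercept b0, slope b1)

def spearman_rho(x, y):
    # Spearman rank correlation via the classic 1 - 6*sum(d^2)/(n(n^2-1))
    # formula; simplified sketch without the tie correction SPSS applies.
    def rank(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for pos, i in enumerate(order, start=1):
            r[i] = pos
        return r
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Invented example: severity coded 1=mild, 2=moderate, 3=severe
severity = [1, 1, 2, 2, 3, 3]
loss_ml = [0.5, 0.7, 1.8, 2.2, 4.7, 5.1]
b0, b1 = simple_ols(severity, loss_ml)  # b1 > 0: loss grows with severity
rho = spearman_rho(severity, loss_ml)   # rho > 0: monotone increase
```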
Additionally, given that time is an ordinal value, Spearman correlation coefficients were utilized to assess the relationship between the time of surgery and postoperative testicular volume loss. All statistical analysis was conducted using IBM SPSS Statistics for Windows, Version 29.0 (Released 2022; IBM Corp., Armonk, New York, United States), and 95% confidence intervals were calculated for the treatment’s success rates, with p-values of < 0.05 considered statistically significant.

Results
A total of 109 patient records were reviewed within the specific time frame. Forty-seven patients were excluded as per the exclusion criteria mentioned, which gave us a sample size of 62 patients. The patient and surgical parameters are given in Table 1 .
Our data showed that 29 (46.7%) patients presented with right-sided testicular torsion and 33 (53.3%) patients presented with left-sided testicular torsion. In terms of degrees of torsion, 19 patients (30.6%) had a mild degree, whereas 28 (45.1%) patients and 15 (24.1%) patients had moderate and severe torsion, respectively. The mean preoperative testicular volume on the unaffected side was 17.9 ± 1.7 ml and the postoperative mean volume was 17.5 ± 1.9 ml. Comparatively, the mean preoperative volume of the affected testis was 18.5 ± 2.1 ml, whereas the mean postoperative volume was calculated for the different degrees of torsion and was as follows: mild 18.0 ± 0.7 ml, moderate 16.5 ± 0.4 ml, and severe 13.6 ± 0.6 ml.
In terms of time to surgery, 14 (22.5%) patients were considered within the mild group (< four hours), and 31 (50%) patients and 17 (27.4%) patients were considered moderate (four to six hours) and severe (> six hours), respectively. The mean preoperative testicular volumes on the unaffected and affected sides are identical to the previously mentioned volumes. The mean postoperative volumes for the different times to surgery were as follows: mild 17.8 ± 0.5 ml, moderate 16.2 ± 0.3 ml, and severe 12.9 ± 0.8 ml.
Figure 1 illustrates how the mean testicular volume loss in ml increases as the severity of the degree of torsion and time to surgery increases. However, it can be noted that time to surgery (orange curve) has a more pronounced effect on the mean volume loss than the degree of torsion.
Table 2 and Table 3 present the results of the linear regression models relating the severity of the degree of torsion and the severity of the time to surgery, respectively, to postoperative testicular volume loss in the affected testis. Increasing severity of the degree of torsion as well as of the time to surgery had statistically significant (p-value <0.05) effects on postoperative testicular volume loss in ml.
Spearman correlation coefficients quantifying the relationship between postoperative testicular volume loss and severity of time to surgery were calculated and showed mild: ρ = 0.65 (p < 0.05), moderate: ρ = 0.52 (p < 0.05), and severe: ρ = 0.40 (p < 0.05). This positive correlation signifies that as time to surgery increases, postoperative testicular volume loss tends to be higher. Moreover, the analysis showed that, on average, with every additional hour from the onset of symptoms to surgery, the approximate volume loss will be 0.15 ml; however, once time exceeds the 4.5-hour mark, the mean volume loss is 0.4 ml for every additional hour.

Discussion
The management of testicular torsion is immediate surgical intervention with the aim of untwisting the spermatic cord and restoring blood supply to the affected testis as soon as possible [ 7 ]. Testicular salvageability is directly correlated to the time it takes to undergo surgical correction [ 8 ]. This has been best described in the systematic review done by Mellick et al., which concluded that if the surgical correction is conducted within less than six hours, the testis salvageability rate is around 97.2%, whereas this number decreases to 7.4% in patients who present after 48 hours [ 9 ]. Moreover, testicular spermatogenesis and hormonal production are proportionately affected by the total testicular volume; hence, testicular volume following repair is an important factor for determining postoperative testis function. Mellick et al. also noted a permanent effect on testicular spermatogenesis and endocrine function, which occurs once the period of torsion exceeds eight hours [ 9 ]. These findings align with what we observed in our study, which showed that the longer the time increases, the higher the volume loss. However, when comparing our findings to the systematic review, we notice that significant postoperative volume loss can occur in patients who underwent orchidopexy as early as four hours after the onset of symptoms.
Furthermore, we demonstrated that time is clearly the more important determinant of postoperative testicular volume loss, as seen in Figure 1 . However, the effect of the degree of torsion should not be understated. This is clear in the comparative graph in Figure 1 , which demonstrates the different mean volume losses between the different severity grades of time and degree of torsion. Additionally, our regression model values show a significant statistical correlation between higher degrees of testicular torsion and increased postoperative volume loss. Howe et al., in their study conducted in 2017, also looked into the degree of twisting and its clinical significance on testicular torsion outcomes [ 10 ]. They concluded that when the spermatic cord undergoes more than 360 degrees of twisting, there is up to a 25% chance of orchiectomy. However, in the current study, we were more concerned with the salvageable testicular volume, and to add to their findings, we conclude that a severe degree of torsion (360 degrees and more) was statistically (p-value <0.05) associated with the highest amount of volume loss, with a mean volume loss of around 4.9 ml.
Additionally, in a study conducted in 2016 by Dias Filho et al., they reviewed the spermatic cord rotation effect on the outcomes of intravaginal testicular torsion [ 11 ], and demonstrated similar findings to the current study. They concluded that presentation delay is the major factor in determining surgical outcomes. However, the degree of spermatic cord rotation exerts a multiplicative effect on time to surgery and increases the chances of orchiectomy. Concurrently, they also found that both presentation delay, as well as degree of torsion, were inversely proportional to chances of orchidopexy [ 11 ]. However, they did not study the exact effect that these variables have on post-orchidopexy volumes as was done in the present study.
When looking at the time to surgery as an independent factor for testicular volume loss, it can be seen from Figure 1 that there appears to be a directly proportionate relationship in which more time leads to higher volume loss. Although the relationship is directly proportionate [ 12 ], it is not particularly linear. This was noted when our statistical analysis showed that the mean testicular volume loss per hour appears to be significantly higher once the time to surgery exceeds 4.5 hours. Comparatively, the overall average volume loss per hour from the onset of symptoms to surgical correction is only 0.15 ml. This signifies that within four to five hours from the onset of symptoms, significant volume loss should be expected even if orchidopexy is done, and the patient should be counseled accordingly to manage postoperative expectations.
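Reading the reported averages as a simple piecewise-linear model is an assumption made here for illustration (roughly 0.15 ml per hour up to the 4.5-hour mark and roughly 0.4 ml per hour beyond it); with that assumption, the expected loss for a given delay could be sketched as:

```python
def expected_volume_loss_ml(hours_to_surgery):
    # Piecewise-linear reading of the reported per-hour averages:
    # ~0.15 ml/h up to 4.5 h from symptom onset, ~0.4 ml/h thereafter.
    breakpoint_h = 4.5
    if hours_to_surgery <= breakpoint_h:
        return 0.15 * hours_to_surgery
    return 0.15 * breakpoint_h + 0.4 * (hours_to_surgery - breakpoint_h)

print(round(expected_volume_loss_ml(3.0), 2))  # 0.45 (ml) in the early window
print(round(expected_volume_loss_ml(8.0), 3))  # loss accumulates faster after 4.5 h
```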
Furthermore, another study published in 2015, which investigated the factors influencing testicular atrophy following torsion, showed that if the time to surgery exceeds 24 hours, then 91% of patients are expected to develop significant testicular atrophy postoperatively [ 13 ]. Our findings are consistent with this; however, we observed that the onset of significant atrophy actually occurs around four hours from the onset of symptoms. This underlines the importance of immediate diagnosis to prevent long-term permanent damage.
The psychological and clinical impacts of post-orchidopexy testicular volume loss should not be understated. In addition to concerns that arise regarding future fertility, the masculine self-perception of the individual can be affected by the physical size of the testis, leading to feelings of inadequacy and even low self-esteem [ 14 , 15 ]. It is well established that libido is considerably affected by stress and can lead to performance anxiety [ 15 ]. Patients who suffer from abnormally small testicular sizes, or particularly from an uneven-appearing scrotum, might experience an added psychological effect leading to a decreased ability to perform sexually and even a tendency to avoid sexual intercourse due to fear of being stigmatized [ 16 ].
Limitations of our study include the fact that ultrasound imaging was not performed by the same radiologist, which can lead to differences in calculated testicular volumes. This was difficult to address given that torsion presents as an acute emergency and imaging must be done immediately by the on-call senior radiologist, without the possibility of delay. Furthermore, the postoperative testicular volume was measured at approximately six-month intervals due to resource and schedule limitations. Further imaging at longer intervals, such as one and two years postoperatively, could help further assess the effects torsion has on testicular volume.

Conclusions
Our study indicates that earlier surgical intervention and correction of torsion are associated with enhanced preservation of postoperative testicular volume. Both the degree of torsion and time to surgery influence mean volume loss; however, time to surgery shows a greater effect on mean volume loss. These results highlight the importance of early diagnosis and intervention in cases of testicular torsion to minimize the risk of long-term testicular volume loss.

Introduction
Testicular torsion is an urological emergency. It is a time-sensitive condition in which twisting of the spermatic cord and testicular blood supply occurs, causing acute onset severe scrotal pain. The incidence of testicular torsion is highest amongst prepubertal males; however, it can occur at any age. Every hour that passes from the onset of symptoms has been shown to decrease the salvageability rate of the torted testis. Another significant factor that impacts testicular salvage is the degree of torsion. Prompt surgical exploration of the scrotum and orchidopexy, if the testis is salvageable, is the mainstay of treatment. A major sequela following orchidopexy for torsion is the decrease in testicular volume. The aim of this study is to assess testicular volume loss post orchidopexy in patients who presented with testicular torsion, as well as to identify the significance of the degree of rotation and duration of torsion in post-fixation volume loss.
Methods
This is a retrospective study in which all patients who underwent scrotal exploration for a primary diagnosis of testicular torsion between June 1, 2016, and January 15, 2023, were reviewed. The information obtained included the patients' demographics such as age, duration of symptoms, and laterality. Ultrasound images were reviewed for pre- and postoperative findings, which included confirmation of testicular torsion as well as testicular volume measurements. Patients were excluded if they underwent an orchidectomy, had a diagnosis other than testicular torsion once scrotal exploration was done, or did not undergo a follow-up scrotal ultrasound. Additionally, patients who had undergone an orchidopexy for undescended testis earlier in life were also excluded. For statistical analysis purposes, degrees of testicular torsion and time to surgery were classified into mild, moderate, and severe.
Results
A total of 109 patient records were reviewed within the specified time frame. Of these, 47 patients were excluded as per the exclusion criteria mentioned previously, giving a sample size of 62 patients. Our findings showed that increasing severity of the degree of torsion as well as the time to surgery had statistically significant (p-value <0.05) effects on postoperative testicular volume loss. However, time to surgery had a more pronounced effect on the mean volume loss than the degree of torsion. Moreover, the analysis also showed that, on average, with every additional hour from the onset of symptoms to surgery, the approximate volume loss is 0.15 ml. However, once time exceeds the 4.5-hour mark, the mean volume loss is 0.4 ml for each additional hour.
Conclusion
The current study indicates that earlier surgical intervention and correction of torsion are associated with enhanced preservation of postoperative testicular volume. Both the degree of torsion and time to surgery influence mean volume loss; however, time to surgery has a greater impact on the mean volume loss. These results highlight the importance of early diagnosis and intervention in cases of testicular torsion to minimize the risk of long-term testicular volume loss.

Cureus. 15(12):e50543 (CC BY)
PMC10787771 | 38222220

Introduction
Obesity, which is considered a multifactorial and complex disease that negatively affects health, is one of the most important causes of preventable deaths today. It contributes to the development of many health problems, such as type 2 diabetes mellitus (DM), cardiovascular disease (CVD), hypertension (HT), hyperlipidemia (HL), cerebrovascular disease, various cancers, obstructive sleep apnea syndrome (OSAS), fatty liver, gastroesophageal reflux, polycystic ovary syndrome (PCOS), osteoarthrosis, and depression [ 1 , 2 ]. Therefore, it creates a significant burden on the health budgets of societies. The prevalence of obesity is increasing in our country and reaching epidemic proportions, as it is all over the world. In the Turkey Diabetes Epidemiology Studies (TURDEP) conducted in 1998 and 2010, the prevalence of obesity in our country increased from 22.3% to 31.2% [ 3 , 4 ]. In 2016, the WHO reported that the country where obesity is most common in Europe is Turkey, with a prevalence of 29.5% [ 5 ].
Three methods are used in obesity treatment: lifestyle change, pharmacotherapy, and bariatric surgery. Clinical studies have shown the effectiveness of lifestyle change and behavioral interventions in obesity. Drugs that provide 5% weight loss in three to six months have been accepted as effective and have been approved by drug regulatory agencies worldwide for managing chronic obesity. Adding pharmacotherapy to lifestyle changes helps achieve further weight loss, facilitates patients' compliance with treatment, and helps improve obesity-related health risks, thus contributing to increased quality of life.
Liraglutide (LG), one of the limited number of obesity drugs available in our country, is a long-acting GLP-1 receptor agonist (GLP-1 RA) that is resistant to metabolism by the dipeptidyl peptidase (DPP)-IV enzyme [ 6 ]. GLP-1 analogs induce weight loss through many central and peripheral mechanisms: they stimulate glucose-dependent insulin release, reduce the glucagon response, and reduce appetite by slowing gastric emptying [ 7 ]. In vitro studies have shown that liraglutide has a central effect, directly stimulating the "cocaine- and amphetamine-regulated transcript" and "pro-opiomelanocortin" neurons and indirectly inhibiting neurons expressing "Agouti-related peptide" and "neuropeptide Y" in the arcuate nucleus of the hypothalamus [ 8 ]. Through these mechanisms, appetite is suppressed, energy intake is reduced, and weight loss occurs. The positive effects of LG on weight loss and metabolic parameters have been emphasized in various studies [ 9 - 13 ].
In Turkey, LG 3 mg (Saxenda®; Novo Nordisk, Bagsvaerd, Denmark) was approved for treating obesity in May 2018. In this study, we aimed to evaluate the effects of LG treatment on weight loss, glycemic and lipid parameters, and the side effects (SE) of this drug as a contribution to the limited number of studies conducted in our country.

Materials and methods
Study design and participants
This single-center and retrospective study was approved by the Ethics Committee of Bursa City Hospital (approval number 2023-19/2) and was performed per the Declaration of Helsinki.
Sixty-seven participants between 18 and 65 years old who used liraglutide for at least 16 weeks for obesity treatment between July 2020 and September 2022 at Bursa City Hospital were included in the study. Individuals who had undergone bariatric surgery, had previously used glucagon-like peptide-1 receptor agonists (GLP-1RA) or another drug that affects weight, or had diseases that could cause weight loss, such as cancer, psychiatric disease, eating disorders, and chronic kidney disease, were not included in the study. Pregnant or breastfeeding women were also excluded. LG was given to obese people (BMI >30 kg/m2) who could not achieve adequate weight loss despite complying with lifestyle changes or to people with a BMI >27 kg/m2 and at least one comorbidity (uncontrolled diabetes mellitus (DM), hypertension (HT), obstructive sleep apnea syndrome (OSAS), hyperlipidemia (HL), etc.).
Treatment was started with 0.6 mg daily, and the dose was titrated weekly and increased to 3 mg/day, according to side effects (SEs). All patients were also given a personalized low-calorie-restricted diet and at least 150 minutes of weekly physical activity.
The participants' diagnoses, medications, prescriptions, demographic characteristics, and laboratory results were accessed in the hospital computer database. The body weight (BW), body mass index (BMI), comorbidities of the patients, and follow-up laboratory results at four and 16 weeks were recorded. The patients were questioned about possible drug-related SEs at each follow-up examination.
The BW was measured on a scale without shoes and extra clothing. BMI was calculated as the BW (in kilograms) divided by the squared height (in meters). All biochemical parameters were analyzed from serum samples after eight hours of fasting. Plasma values of fasting glucose (FG), fasting insulin (FI), glycosylated hemoglobin (HbA1c), and the lipid profile [triglycerides (TG) and low-density lipoprotein (LDL)] were recorded. The Homeostatic Model Assessment-Insulin Resistance Index (HOMA-IR), calculated as FG (mg/dl) × FI (mU/l) / 405, was used to measure insulin resistance for all individuals. Type 2 DM, prediabetes, HT, HL, OSAS, PCOS, and a history of CVD were obtained from the patients' records. Prediabetes and type 2 DM were diagnosed according to the American Diabetes Association's diabetes diagnostic criteria [ 14 ].
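The insulin resistance index used here reduces to a one-line calculation; a minimal sketch (function and variable names are illustrative, not from the study):

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_mu_l: float) -> float:
    """HOMA-IR = fasting glucose (mg/dl) x fasting insulin (mU/l) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_mu_l / 405.0
```

For example, a fasting glucose of 90 mg/dl with a fasting insulin of 9 mU/l gives a HOMA-IR of 2.0.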
Statistical analysis
We performed statistical analyses using the IBM SPSS Statistics version 23 package program (IBM Inc., Armonk, New York). Continuous variables were expressed as mean ± standard deviation for descriptive statistics, and categorical variables were expressed as frequencies and percentages. The Shapiro-Wilk test was used to test for normal distribution. The mean values of variables with a normal distribution were compared using Student's t-test or analysis of variance (ANOVA), and those without a normal distribution were compared using the Mann-Whitney U test. The significance level was set at p<0.05.

Results
Seventy-one patients were evaluated in the study. Due to the high cost of LG, 10 participants could not continue the treatment for 16 weeks and were therefore not included in the statistical analysis. Sixty (89.5%) women and seven (10.5%) men with a mean age of 42.8 ± 4.4 years met the inclusion criteria. At baseline, the mean BW and BMI were 103.8±18.7 kg and 35.2±7.21 kg/m2, respectively. The participants' baseline characteristics are presented in Table 1 . Patients were classified as overweight, class 1, class 2, and class 3 obese according to their BMI. Most patients were in the class 2 obese category (n=17, 25.3%). Of the study patients, 19 (28.4%) were prediabetic, 45 (67.1%) were normoglycemic, and three (4.5%) were diabetic. There was no concomitant disease in 38 (56.7%) patients. Other comorbidities are shown in Table 1 .
In this study, all patients reached the LG 3 mg/day target dose and were followed up. The mean BW decreased from 103.8±18.7 kg at the beginning of therapy to 97.6±17.5 kg at four weeks and 92.1±16.4 kg at 16 weeks; the differences were significant between baseline and four weeks (p=0.023), baseline and 16 weeks (p<0.001), and four and 16 weeks (p=0.019). The mean BMI decreased from 35.2±7.21 kg/m2 at baseline to 33.72±7.22 kg/m2 at four weeks and 29.61±7.14 kg/m2 at 16 weeks, with significant differences between baseline and four weeks (p=0.045), baseline and 16 weeks (p<0.001), and four and 16 weeks (p=0.034). At the end of 16 weeks, the percentage of body weight loss (BWL) was comparable between obesity classes 1, 2, and 3 (-9.81±1.93%, -11.02±2.11%, and -12.94±2.94%, respectively; p=0.954), and similar rates of ≥5% BWL were achieved in the three groups (72.6%, 74.8%, and 78.5%, respectively; p=0.623).
The mean BW and BMI losses of patients using LG at four and 16 weeks after treatment initiation were -6.17 ± 1.34 kg and -1.51 ± 1.25 kg/m2, and -11.71±2.21 kg and -5.56±1.88 kg/m2, respectively (Table 2 ). After four and 16 weeks of LG use, the proportions of patients who lost more than 5% of their initial BW were 38.8% vs. 76.1%, respectively (p=0.034). At four weeks, 14.9% of participants had ≥10% BWL, and this rate increased to 59.7% at 16 weeks (Table 2 ).
Changes in metabolic parameters such as fasting glucose (FG), fasting insulin (FI), homeostatic model assessment-insulin resistance (HOMA-IR), glycosylated hemoglobin (HbA1c), low-density lipoprotein (LDL), and triglyceride (TG) values before starting treatment and at four and 16 weeks are summarized in Table 3 . A statistically significant difference was observed between the baseline, week four, and week 16 mean HOMA-IR values (p<0.001). The baseline HOMA-IR levels were statistically significantly higher than the week four and week 16 HOMA-IR levels (p<0.001). A statistically significant difference was observed in HbA1c levels between baseline, four weeks, and 16 weeks (p<0.001). The 16-week mean HbA1c levels were statistically significantly lower than the baseline and four-week mean HbA1c levels (p<0.001). Similar findings were shown for mean FG and FI levels, with the 16-week levels significantly lower than the baseline and four-week levels (p<0.001, p<0.001, respectively). There was a significant decrease in baseline LDL and TG concentrations at the end of four and 16 weeks (p<0.001, p<0.001, respectively). In contrast, the difference between four and 16 weeks was insignificant (p=0.234, p=0.089, respectively).
While 45 (67.2%) patients did not experience any SEs after starting LG treatment, the most common SEs were nausea (29.4%), abdominal pain (11.8%), vomiting (10.3%), diarrhea (7.2%), and others (15.9%) (headache, dyspepsia, influenza-like symptoms, constipation). Despite these digestive SEs, none of the patients discontinued their treatment.

Discussion
This study aimed to demonstrate the effectiveness and SE profile of overweight and obese patients with LG treatment evaluated in clinical practice. The findings of this retrospective study showed that mean BW, BMI, FG, FI, HOMA-IR, HbA1c, LDL, and TG levels were significantly reduced in obese or overweight patients at the 16-week follow-up.
Obesity is a chronic disease associated with high morbidity and mortality risks and limited quality of life that requires long-term medical attention. In addition, the increase in health expenditures places heavy burdens on national economies. Treatment options include diet and exercise, medication, or surgery. Many drugs with different mechanisms of action can be used to treat obesity. Although pharmacotherapy is an effective method in the treatment of obesity, drug costs are often a limiting factor. Obesity treatment options worldwide include phentermine, phentermine/topiramate, lorcaserin, naltrexone/bupropion, diethylpropion, orlistat, and LG. In Turkey, only orlistat and LG are approved for use in treating obesity. The pancreatic lipase inhibitor orlistat has serious gastrointestinal SEs, and tolerability is difficult to achieve. In a European study, GLP-1 analogs were more effective in weight loss than orlistat and glimepiride [ 15 ]. In another retrospective study from Spain, BWL with LG (-7.7 kg) was significantly greater than that observed with orlistat (-3.3 kg), and approximately two and a half times more patients lost at least 5% of their initial BW with LG than with orlistat [ 16 ]. Naltrexone-bupropion (an opioid antagonist combined with an antidepressant), the sympathomimetic phentermine plus topiramate, and pramlintide were found to be as effective as LG in weight loss but were not recommended due to SEs; therefore, the use of LG comes to the fore [ 17 ].
The SCALE randomized controlled clinical trial followed 3731 participants with obesity receiving LG; over 13 months, 63.2% and 33.1% of all participants lost at least 5% and 10% of their BW, respectively [ 9 ]. In a meta-analysis, Konwar et al. included approximately 6000 obese patients without DM who were using LG and observed 2.8-11.8 kg of BWL over 12-56 weeks of follow-up [ 18 ]. Recently, Cetiner et al. from Turkey evaluated 201 patients using LG for 12 months and showed significant BWL: three months after starting LG treatment, 72.14% of the patients (n=145/201) had lost more than 5% of their weight, and by the end of six months, almost all (n=96/106) had done so. Additionally, the mean weight loss was 17.79 ± 8.93 kg for those who continued treatment for 12 months [ 19 ]. Our investigation determined that LG 3.0 mg in patients with obesity or overweight led to significant BWL of 6.1 to 11.7 kg (4.9%-10.9%) at four and 16 weeks of treatment, respectively. The results revealed that over 75% and 55% of the participants who used LG for the initial 16 weeks achieved ≥5% and ≥10% BWL, respectively. These findings are similar to, or mostly higher than, those reported in previous randomized controlled clinical trials [ 9 , 20 - 22 ]. Additionally, our results are superior to those of Italian [ 10 ], Canadian [ 11 ], and Spanish [ 16 ] real-life studies, which demonstrated that 64% to 68% of patients exhibited >5% BWL and 20% to 35% exhibited >10% BWL at four to seven months of treatment. In contrast, compared to a smaller cohort from Switzerland (n=54), the percentage of our patients reaching ≥5% weight loss at 16 weeks was lower [ 12 ]. In that study, with a four-month follow-up, 87% of subjects showed ≥5% BWL, and this percentage increased to 96% at 10 months [ 12 ]. This difference can be attributed to different nutritional habits between populations.
In Turkey, where dietary habits involve extremely high carbohydrate intake, transitioning to a low-calorie diet combined with the appetite-suppressing effects of LG may have resulted in more significant weight loss in a short period.
In patients using LG, appetite decreases and gastrointestinal intolerance may occur, and significant weight losses have been noted. A recently published randomized placebo-controlled trial of LG found that patients who experienced nausea achieved a more significant absolute weight loss [ 23 ]. Previous studies found a direct correlation between drug dose and weight loss [ 10 , 20 ]. BWL increases, especially when the dose is increased to 2.4-3.0 mg/day [ 24 ]. In this study, all patients started with LG 0.6 mg and reached the 3 mg maximum dose with dose titration within four weeks. Therefore, the relationship between weight loss and drug dose could not be evaluated.
In another study conducted in Canada, the effectiveness of LG was determined according to the degree of obesity, and no difference in the effectiveness of LG was found in stage 1, stage 2, and stage 3 obese patients [ 11 ]. In our study, the percentage of BW change and ≥5% BWL were similar between obesity classes.
Our study, consistent with a recent meta-analysis, showed a significant decrease in glycemic control variables such as HbA1c, FG, FI, HOMA-IR, and fasting lipid parameters [ 25 ]. However, it is unclear whether GLP-1RAs ameliorate metabolic parameters to the same extent in obese patients with and without diabetes. Santini et al. found considerable improvement in triglycerides, glucose profile, and insulin resistance but no significant changes in total cholesterol, LDL, or high-density lipoprotein (HDL) levels [ 12 ].
The SEs of LG were consistent with findings in previous reports. Our evaluation of LG's SEs showed that 67.1% did not experience any SEs, while 32.9% reported SEs. The most common SE was nausea, observed in approximately one in three participants. Other reported SEs included abdominal pain, vomiting, diarrhea, headache, dyspepsia, influenza-like symptoms, and constipation. However, these SEs are mild to moderate and transient with symptomatic treatment or dose reduction [ 26 ]. In our study, patients had nausea during the increase in dose, especially in the transition to 1.2-1.8 mg/day, and many of them were given only symptomatic treatment. Also, LG is not recommended for those with a personal or family history of pancreatitis and multiple endocrine neoplasia (MEN) 2A and 2B [ 27 ].
Despite the positive effects of LG on metabolic control and weight loss, it continues to be an expensive treatment in our country. We think that GLP-1 analogs should be included in the scope of health insurance, considering the substantial benefits they will provide in the fight against obesity in the future.
This study has several limitations. First, the study was single-center and retrospective and had a small sample size. Second, it cannot provide long-term BW changes because the follow-up lasted only 16 weeks; due to the cost of the medicine, patients' duration of use is shortened, and long-term results cannot be evaluated. Finally, most of our study participants were women; thus, studies with greater numbers of both genders are needed.

Conclusions
In recent years, medical treatments for obesity, in addition to lifelong adequate and balanced nutrition, physical activity, and behavioral therapies, have come to the fore. This study showed a clinically significant decrease in BW and improved cardiometabolic parameters over four and 16 weeks of treatment with LG. It stands as a safe and effective medical treatment modality for addressing obesity. Unfortunately, financial limitations in drug use remain a significant obstacle.

Background
Obesity is a major chronic disease that negatively affects both the individual and society. Liraglutide (LG) is effective for both obesity treatment and metabolic control. This study aims to show the effect of a 3.0 mg dose of LG, injected subcutaneously once a day, on weight loss and metabolic parameters.
Methods
This retrospective single-center study included 67 patients (60 women and seven men) with a BMI of at least 27 kg/m2 with comorbidities or a BMI of at least 30 kg/m2. Demographic characteristics, anthropometric measurements, and biochemical data of the participants were evaluated at the end of four and 16 weeks.
Results
The mean body weight (BW) loss of patients using LG at 16 weeks was -11.71±2.21 kg. After four and 16 weeks of LG use, the proportions of patients who lost more than 5% of their initial BW were 38.8% vs. 76.1%, respectively (p=0.034). The mean baseline Homeostatic Model Assessment-Insulin Resistance Index, hemoglobin A1c, low-density lipoprotein, and triglyceride values were significantly higher than those at four and 16 weeks (p<0.001). Twenty-two (32.8%) patients experienced side effects (SE) after starting LG treatment, and the most common SE was nausea (29.4%).
Conclusion
The use of LG, which is not covered by insurance, together with diet and exercise, has been shown to produce clinically significant weight loss and a positive effect on glycemic values and lipid profile.

Cureus. 15(12):e50544 (CC BY)
PMC10787772 | 38222173

Introduction
One of the prime reasons for patients seeking orthodontic treatment is improvement in their aesthetics or appearance. With a greater number of adult patients now opting for orthodontic treatment, the demand for aesthetic orthodontic materials has increased [ 1 ]. Patients desire to undergo orthodontic treatment without compromising their appearance during the treatment period. The increasing demand for more aesthetic orthodontic appliances has elicited an aesthetic revolution marked by the emergence of invisible appliances such as aesthetic brackets, lingual appliances, and clear aligners [ 2 ]. Ceramic brackets, clear aligners, and tooth-colored archwires are the new yardsticks for aesthetic orthodontic appliances. They are promising alternatives to conventional metallic materials that contain nickel for patients with nickel sensitivity [ 3 ]. Orthodontic archwires have significantly evolved from their original conception. Previously made of gold, they are currently made of various alloys such as stainless steel, nickel-titanium, copper-titanium, and other such alloys [ 4 ]. With the advent of ceramic and composite brackets, it was natural for archwires to change from their conventional metallic look to a more contemporary aesthetic look to enhance the patient's appearance during the treatment period. The first aesthetic transparent nonmetallic orthodontic wire, known as Optiflex, was made of a silica core, a silicone resin middle layer, and a stain-resistant nylon outer layer and was marketed by Ormco [ 5 ]. An archwire is usually replaced after four to eight weeks as per the schedule followed by orthodontists [ 6 ]. Thus, the wires need to sustain their aesthetic coating for at least eight weeks before they are replaced with the successive wire.
A preliminary literature review shows that only a few such studies have been conducted, mostly in the American context. Against the background of the recent coronavirus disease 2019 (COVID-19) pandemic and the popularity of strongly pigmented beverages consumed for their immunity-boosting role, studies exploring the effect of such beverages on orthodontic appliances may improve the decision-making process of selecting such aesthetic appliances [ 7 ].

Materials and methods
Four brands of wires were included in this study. The wires were Teflon-, epoxy-, or ceramic-coated. Convenience sampling was done, and five samples of each brand were prepared to be tested in each solution. The samples needed to be in a tile form of 10 x 10 mm dimension, as the minimum size required for spectrophotometry is 8 mm diameter. Archwires of one brand were marked and cut into equal pieces that were 10 mm long. The ends of these pieces were approximated such that light could not pass through them. The approximated pieces were kept on a glass slab coated with petroleum jelly to prevent them from sticking to the glass slab. The ends of the wires were glued together using light cure composite and glue as shown in Figure 1 .
Three hundred ml of distilled water was used to prepare different solutions. After mixing and boiling, the solutions were cooled to room temperature and strained. A coffee solution was prepared using commercially available coffee powder (Nescafe) sachets. One teaspoon of coffee powder was added to 300 ml of boiling distilled water and stirred for uniform mixing as per the manufacturer’s instructions. Tea was prepared by adding commercially available tea powder. Two tablets of commercially available AYUSH kadha (Dhootapapeshwar dispersible tablet; Shree Dhootapapeshwar Limited, Punjab, India) were mixed as per instructions given by the manufacturer. Two vitamin C tablets (Limcee) were dissolved in 300 ml water and stirred well. A tablespoon of Chyavanprash was mixed in water at room temperature and stirred well. For making turmeric milk, one teaspoon of commercially available turmeric powder was added to boiling milk and allowed to cool.
All solutions were divided into four parts of 75 ml solution each using a measuring cylinder for staining four brands of archwire.
Before the specimens were immersed in the solutions, the color of each sample was measured using the spectrophotometer and recorded as the color at T0. The samples were immersed in their respective solutions for 30 minutes each day over the two-, four-, and eight-week test periods. Fresh solutions were prepared every day.
Color measurement of the samples was done as follows.
Samples were tested at two, four, and eight weeks after immersing them in various solutions such as turmeric milk, AYUSH kadha, vitamin C tablets’ solution, coffee, Chyavanprash solution, and tea.
After the first measurement (T0), the samples were placed in a container with the prepared staining solution. Color measurements were repeated after two weeks (T1), four weeks (T2), and eight weeks (T3) of immersion in the solution. Before each measurement, samples were removed from the solution and rinsed with water. Excess water on the surfaces was blotted with tissue paper, and the samples were allowed to dry. Thereafter, the samples were subjected to spectrophotometric analysis. The spectrophotometer used in this study was from VITA Zahnfabrik H. Rauter GmbH & Co. KG, Germany, Sr No. H57127.
The samples were placed on a flat surface with a green background. The nose of the spectrophotometer was placed perpendicular to the center of the sample (Figure 2 ). The spectrophotometer automatically generated three measurements from which it calculated a mean color measurement which was seen on the spectrophotometer’s screen. Color changes were characterized using the Commission Internationale de I’Eclairage L*a*b* color space (CIE L*a*b*).
The ΔE value of each sample was thus calculated.
Color differences (ΔE*) were determined using the following equation:

ΔE = [(ΔL)^2 + (Δa)^2 + (Δb)^2]^(1/2)

where ΔE = the color difference between the respective samples before and after the intervention;

ΔL = the difference in the L* value [darkness (0) to lightness (100)];

Δa = the difference in the a* value [redness (positive a*) or greenness (negative a*)];

Δb = the difference in the b* value [yellowness (positive b*) or blueness (negative b*)];

with the L*, a*, and b* values recorded before immersion (T0) and after immersion at each time interval (T1, T2, T3).
To relate the amount of color change (ΔE*) to a clinical environment, the data were converted to National Bureau of Standards (NBS) units as follows: NBS units = ΔE* × 0.92.
The definitions of color changes quantified by NBS units were used. These values were suggested by Koksal and Dikbas as shown in Table 1 [ 8 ].
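The two formulas above (the CIE76 color difference and the NBS conversion) can be combined into a short helper; this is a sketch, and the function names are illustrative:

```python
import math

def delta_e(lab_before, lab_after):
    """CIE76 color difference between two (L*, a*, b*) readings."""
    d_l = lab_after[0] - lab_before[0]
    d_a = lab_after[1] - lab_before[1]
    d_b = lab_after[2] - lab_before[2]
    return math.sqrt(d_l ** 2 + d_a ** 2 + d_b ** 2)

def nbs_units(delta_e_value):
    """Convert a ΔE* value to National Bureau of Standards units (ΔE* x 0.92)."""
    return delta_e_value * 0.92
```

For example, a hypothetical sample measured at (70, 2, 10) before and (67, 6, 10) after immersion gives ΔE = 5.0, i.e., 4.6 NBS units, which would then be graded against the categories in Table 1.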
Statistics
A comparison of aesthetic degradation due to color change among the four brands of archwires was made by applying the one-way analysis of variance (ANOVA) test. P values were calculated for all samples to determine whether the observed color changes were statistically significant. Descriptive statistics were used to grade the degree of color change using ΔE based on NBS units.

Results
Table 2 provides descriptive statistics (mean and standard deviation) for color change (ΔE) for the four brands of wires dipped in the six solutions for two weeks. At two weeks, the highest ΔE value, 26.92 (0.35), was observed in the U Orthodontics (New Delhi, India) archwire after immersion in the Chyavanprash solution (Table 2 and Figure 3 ). The lowest ΔE value, 1.87 (0.39), was observed in the Libral Traders (New Delhi, India) archwire group in the vitamin C solution (Table 2 and Figure 3 ).
Table 3 provides descriptive statistics (mean and standard deviation) for color change (ΔE) for the four brands of wire dipped in the six solutions for four weeks. At four weeks, the color change intensified for all the archwires, with a significant increase in ΔE.
JJ Orthodontics (Kerala, India) showed a ΔE of 3.09 (0.27) in the vitamin C solution. Koden (Kerala, India) archwires showed more color change in the tea solution than the other archwires, with a ΔE of 12.16 (0.23). Overall, the color change was least intense in the vitamin C solution and with the Libral Traders archwires, whereas it was most intense in the Chyavanprash solution and with the U Orthodontics archwires (Figure 4 ).
The pairwise intergroup comparison at T2, i.e., four weeks, suggested that the difference in color change among various brands of archwires was statistically significant for most of the solutions. The result was statistically highly significant for all intergroup comparisons for AYUSH kadha, turmeric milk, Chyavanprash, and tea. There was no difference in color degradation between JJ Orthodontics and U Orthodontics archwires in the coffee solution. Libral and Koden had a similar amount of color change in the vitamin C solution as the p value was >0.05.
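The one-way ANOVA underlying these comparisons can be sketched as follows; the ΔE samples below are hypothetical (n = 5 per brand, matching the study's group size), not the study's data.

```python
def one_way_anova(groups):
    """F statistic and degrees of freedom for a one-way ANOVA, e.g.
    comparing Delta E across archwire brands within one solution."""
    k = len(groups)                           # number of brands
    n = sum(len(g) for g in groups)           # total specimens
    grand_mean = sum(x for g in groups for x in g) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical Delta E readings (n = 5 per brand) for one staining solution
brands = [
    [26.5, 27.1, 26.8, 27.3, 26.9],
    [12.0, 12.3, 11.9, 12.4, 12.2],
    [3.0, 3.2, 2.9, 3.3, 3.1],
    [1.8, 2.0, 1.9, 2.1, 1.7],
]
f, dfb, dfw = one_way_anova(brands)  # compare f with the F(3, 16) critical value
```

A statistics package would normally follow this with pairwise post-hoc tests, as reported in the intergroup comparisons above.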
Table 4 provides descriptive statistics (mean and standard deviation) for the color change (ΔE) obtained for four brands of wire dipped in six solutions for eight weeks (T3). At eight weeks, the color change intensified for all the archwires, with a significant increase in ΔE (Figure 5 ). The color change was greatest in the U Orthodontics archwires and the Chyavanprash solution (Figure 5 ). The difference was statistically significant for all archwires in all solutions.
NBS values at the end of eight weeks suggested that almost all archwires showed ‘much’ difference in color. The vitamin C solution caused only appreciable color changes in the archwires as compared to the Chyavanprash solution, which led to ‘very much’ change.
All intergroup comparisons at the end of eight weeks (T3) indicated that the changes produced by the vitamin C solution were not statistically significant for the archwires. The p value was <0.001 for all brand groups in the Chyavanprash solution except in the JJ Orthodontics versus Libral Traders group (Figure 5 ). Also, the color change among most brand groups in the vitamin C solution was almost similar and thus not statistically significant.
Overall, the results showed that none of the archwires resisted color change after immersion in the staining solutions for two, four, and eight weeks.

Discussion
Multiple studies have been conducted on the color stability of various archwires in different staining solutions such as coffee, tea, cola, and wine. The consumption of beverages in the Indian context is quite different and has changed considerably since the COVID-19 pandemic. As per the guidelines given by the AYUSH Department of the Government of India, it was recommended to drink AYUSH kadha and golden milk (turmeric milk) once or twice daily to boost immunity [ 7 ]. Chyavanprash, which is composed of a highly concentrated mixture of nutrient-rich plants and minerals, was also suggested by the same guidelines, as it is intended to boost immunity [ 9 ]. As the ingredients of these beverages tend to have a staining effect, our study aimed to determine whether the aesthetic archwires maintained their color on consistently encountering these staining solutions.
Usually, the duration between two appointments to change archwires is four to six weeks [ 6 ]. Previous studies by da Silva et al. (2013), Deepika S et al. (2016), and Anand A (2020) measured the color change after three weeks [ 10 - 12 ]. This is the minimum duration that wires should resist color change before they are replaced. Hence, the time intervals of two, four, and eight weeks (T1, T2, and T3, respectively) were selected for our study.
Of the four brands of wires used in this study, two had Teflon coating (JJ Orthodontics and U Orthodontics), one had ceramic coating (Koden), and one had epoxy coating (Libral Traders). Results showed that irrespective of the brand and coating, all archwires displayed a staining effect when immersed in the different solutions. The finding that epoxy-coated archwires were more color-stable than Teflon-coated wires is consistent with the findings of the study conducted by Anand A (2018), who used red wine, orange juice, and mouthwash as staining solutions [ 12 ]. JJ Orthodontics showed the minimum color change as per the ΔE values in vitamin C as compared to the other wires. U Orthodontics showed less staining (6.86) than Libral Traders (7.39) in turmeric milk. U Orthodontics wires, which were Teflon-coated, showed maximum staining among all the archwires, followed by JJ Orthodontics, which were also Teflon-coated. Libral Traders archwires resisted staining the most, followed by Koden archwires. Thus, epoxy and ceramic coatings seemed to have better stain resistance than other coatings. Studies conducted by Anand A et al., Alsanea et al., and Ismail N et al. reached similar conclusions [ 12 - 14 ]. Teflon-coated wires were more stained even when immersed in fluoridated and non-fluoridated mouthwashes, as per the study by Hussein L et al. [ 15 ]. Epoxy-coated archwires had an almost equal color change value compared to rhodium-coated archwires, whereas rhodium-coated archwires were superior to Teflon-coated archwires in maintaining their color stability [ 14 ]. According to all these experiments, the Teflon-coated aesthetic archwires were more prone to color changes when dipped in various dietary staining solutions. The higher propensity of Teflon-coated archwires for color change may be caused by the production process [ 16 ].
With respect to the solutions (i.e. Chyavanprash, tea, coffee, turmeric milk, vitamin C solution, and AYUSH Kadha) included in this study, ΔE was observed over two, four, and eight weeks. Some studies conducted on ceramic brackets concluded that wine was the most staining solution in comparison with other staining substances such as mouthwash and cola drinks [ 12 ]. Studies by Mutlu-Sagesen L et al. and Ertaş E et al. concluded that coffee was the most staining solution in comparison with other staining substances such as tea and cola drinks [ 17 - 18 ]. Ismail N et al. suggested that adding milk to the preparation reduced the staining effect of coffee and tea and may reduce the concentration of staining pigments present in these solutions [ 14 ]. The studies mentioned above support the use of such solutions to test the color stability of archwires.
Although the present study did not statistically test which agent stained the most, the vitamin C tablets dissolved in distilled water were observed to have the lowest staining effect, followed by the AYUSH kadha tablet solution, according to the ΔE findings. The reason for this observation could be that both these tablets were completely dispersible and thus left minimal remnants on the wire surface, which could be easily cleaned with tap water.
Since none of the previous studies included Chyavanprash solution, AYUSH kadha, turmeric milk, or vitamin C solution, their effect on archwires was a novel finding of this study. The staining of the archwires was visible to the naked eye at all time intervals for all solutions, suggesting that none of the archwires could serve as a benchmark for good aesthetic materials.
Limitations
This study explored the color stability of four brands of aesthetic archwires in six beverages. However, it also has some limitations that provide scope for future research. For example, more brands of wires could be incorporated in such a study. The color stability of archwires should also be evaluated in vivo, as the environment of the oral cavity may have a different effect on the staining of archwires. However, it may not be possible to monitor the effect of a single solution on an archwire if the study is conducted intraorally, as other factors such as the patient’s oral hygiene and salivary flow can change the results.
Clinical significance
Based on the current research, it can be concluded that epoxy-coated archwires could currently be preferred for a patient undergoing fixed orthodontic therapy with aesthetic brackets. Against the background of the COVID-19 pandemic, vitamin C and AYUSH kadha are the solutions most suitable for preserving the aesthetics of archwires.

Conclusions
Our study tested the staining effect of six solutions on four different brands of archwires at two, four, and eight weeks. At the end of all time intervals, none of the archwires resisted a color change irrespective of the brand or coating of archwires. With respect to the solutions, all solutions, i.e. Chyavanprash, tea, coffee, vitamin C, turmeric milk, and AYUSH kadha, displayed a staining effect on all the aesthetic archwires. ΔE values suggest that there could be a difference in the degree of color change in the various staining solutions, the statistical significance of which can be investigated in future studies.
Since the consumption of beverages apart from the tea and coffee used in this study is becoming popular worldwide, the results of this study are applicable not only in the Indian context but also globally. Overall, this study provides a basis for further research, which could include more solutions and archwires to statistically determine the most aesthetically stable archwires. This can help clinicians guide their patients better in maintaining the aesthetics of their appliances throughout the treatment.

Introduction
One of the prime reasons for patients seeking orthodontic treatment is improvement in their aesthetics or appearance. With a greater number of adult patients now opting for orthodontic treatment, the demand for aesthetic orthodontic materials has increased. With the background of the recent coronavirus disease 2019 (COVID-19) pandemic and the popular role of strongly pigmented beverages that play an immunity-boosting role, studies exploring the effect of such beverages on orthodontic appliances may improve the decision-making process of selecting such aesthetic appliances.
Materials and methods
Four brands of wires and six beverages were included in this study. The wires were Teflon-, epoxy-, or ceramic-coated. Convenience sampling was done, and five samples of each brand were prepared to be tested in each solution. Samples were tested under a spectrophotometer after immersing them in various solutions for two, four, and eight weeks. A comparison of aesthetic degradation due to color changes amongst four brands of archwires was done by applying the one-way analysis of variance (ANOVA) test. P values were calculated for all samples to determine whether the color change that occurred in the samples was statistically significant or not.
Results
Overall, the results showed that none of the archwires resisted color change when immersed in the staining solutions for two, four, and eight weeks, and this finding was statistically significant.
Conclusion
At the end of all time intervals, none of the archwires resisted a color change irrespective of the brand or coating of archwires. This result was found to be statistically significant. With respect to the solutions, all solutions from Chyavanprash, tea, coffee, vitamin C, turmeric milk, and AYUSH kadha displayed a staining effect on all the aesthetic archwires. | CC BY | no | 2024-01-15 23:41:58 | Cureus.; 15(12):e50542 | oa_package/db/e5/PMC10787772.tar.gz |
PMC10787773 | 38218868 | Introduction
Fluid simulations enable the investigation of blood flow distribution in the cardiovascular system to better understand disease progression, inform surgical procedures and evaluate responses to internal and external conditions affecting the body. These simulations can also be used to reduce the risks associated with extreme environments, such as the microgravity experienced by astronauts during long-duration spaceflight, where cardiovascular and muscular deconditioning can occur along with the development of a condition known as spaceflight-associated neuro-ocular syndrome (SANS) 1 , impairing vision.
There are a number of cardiovascular-related changes that can arise during, or as a result of, long-duration spaceflight including large fluid shifts and stroke volume changes, variations in blood pressure, vascular tissue changes and orthostatic intolerance 2 . Terrestrially based research can be performed to emulate phenomena associated with spaceflight and to investigate the long-term implications on the body through methods such as long-duration head-down tilt (HDT) and water immersion experiments or parabolic flights. However, these experiments generally require extensive planning and are often associated with high costs due to duration and equipment requirements 3 . Furthermore, HDT experiments may not necessarily accurately emulate microgravity as they induce artificial pressure gradients, whilst parabolic flights can only provide short exposure windows of 20–30 s at a time 4 . Alternatively, computational fluid dynamics (CFD) simulations offer a relatively low-cost approach to model fluid changes associated with spaceflight, in either human or animal models. In addition, simulations are also advantageous in that they can use retrospective data and account for the varying sizes and scales of cardiovascular networks throughout the body.
Zero-dimensional (0D) lumped parameter modelling is often employed to model large-scale arterial networks across a wide range of conditions with relatively low computational cost. However, a drawback of low dimensional models is the inability to capture localised haemodynamics such as wall shear stress (WSS) distributions, or non-uniform flow through vessels due to geometric factors such as stenosis, bifurcations, tortuosity or high degrees of vessel curvature. Three-dimensional (3D) CFD simulations of the vasculature enable the evaluation of localised flow to a high degree of spatial and temporal resolution 5 , 6 . However, 3D CFD simulations often omit vascular networks upstream and downstream of a domain of interest due to increased computational cost. In place of an upstream geometry, inlet boundary conditions can be specified using measured data, existing literature, or 0D modelling. Downstream of a domain of interest, outlet boundaries can be prescribed using zero pressure or specified flow split outlets. Alternatively, resistive elements, fractal trees or Windkessel modelling can be employed to emulate the downstream resistance and compliance of peripheral arteries and venous networks.
Lower dimensional fluid mechanics studies (i.e., 0D, 1D and 2D) have previously been used to model the arterial tree leading to the cerebrovasculature, or within the eye, in attempts to understand disease development such as glaucoma 7 , diabetic retinopathy 8 , hyper- and hypotension 9 as well as the effects of spaceflight or ground-based HDT experiments 4 , 10 . Studies have simulated blood flow within large arterial networks for the purposes of understanding pulse wave velocity propagation and age-related arterial stiffening 11 , arterial particle and embolism transport 12 , 13 , calculation of the ankle-brachial index 14 , effect of venoarterial extracorporeal membrane oxygenation 15 and demonstration of meshing strategies and computational optimisation 16 . Previous large-scale simulations have also encountered challenges in accurately evaluating localised haemodynamic metrics such as WSS due to computational limitations 11 . Development of a large-scale human artery haemodynamics framework enables verification with existing models of blood flow under conditions such as simulated microgravity 4 , 10 , 17 .
In this study, we aimed to construct a physiologically possible 3D model of human blood vessels ranging from the aortic root through to the retina, by combining existing subject-specific 3D models of arterial vessels from different sources. With this geometry, we developed fluid simulations that accommodated a single continuum physics model for modelling blood across a large arterial network, and gravitational effects to investigate the distributions of blood flow to distal small arteries, specifically in the eye. We used this framework to compare haemodynamic metrics at arterial regions of interest within this vessel network in response to Earth’s gravitational conditions and simulated microgravity.

Methods
Imaging methods and 3D reconstruction
This study was approved by the University of Western Australia (2022/ET000688). We sourced 3D arterial vasculature models from numerous past studies 18 – 22 . We imported the associated stereolithography files into STAR-CCM+ (v15, Siemens, Munich, Germany), where we used the surface repair tool to manually perform iterative rigid transformations to closely align each adjacent model using general imaging landmarks (e.g., aortic arch), combine selected overlapping vertices and then iteratively smooth the overlapping regions to achieve an average lumen (Fig. 1 ).
The superficial retinal arteriole section of the 3D model used data from a previous study 18 , which was reconstructed from a publicly available retinal fundus image of a healthy eye (CANON CF-60UVi) from the High-Resolution Fundus Image Database 23 . Briefly, the images were filtered and converted to binary processed images before manual segmentation of the arterioles and their diameter using open-source graphics editing software (GIMP, GIMP Team, California, United States). The resulting image was then segmented again in Mimics (v18, Materialise, Belgium) to create a 3D geometry and the centerline was extracted using 3-Matic (v10, Materialise, Belgium), before creating lofted cylinders along these centerlines and transforming the 3D model to a spherical curvature with a radius of 11.824 mm using a coordinate transformation in MATLAB (2016b, Mathworks, Massachusetts, United States). This model was duplicated and applied for both the left and right eyes, correcting for nasal and temporal orientation on either side. More detail on the image analysis and geometry creation is provided in Rebhan et al. 18 .
The 3D model of the cerebrovascular and neck arteries was obtained as part of a previous study 19 , where participants were imaged using 3 T time-of-flight magnetic resonance angiography (3 T TOF MRA) (Siemens Magnetom, Skyra) with a corresponding pixel size of 0.31 mm and a slice thickness of 0.75 mm. These images were reconstructed using in-house software to create a 3D isosurface, which was subsequently smoothed to within 5% of its starting volume and reconstruction artefacts were removed. While segmenting the cerebrovascular and neck arteries, a manual mask segmentation was created that followed the centre of the optic nerve sheath from the retina to the ophthalmic artery, representing the branching central retinal artery (CRA). This segmented cylindrical line was initially at the resolution of the 3 T TOF MRA images at approximately 0.3 mm in diameter. Upon importation into STAR-CCM+ for surface repair, we reduced the thickness of this line to 163 μm using local mesh smoothing techniques to match the average measured diameter of the CRA in healthy individuals 24 – 26 .
The 3D model of the aortic root and coronary arteries was obtained from previously reconstructed computed tomography (CT) coronary angiogram images from a recent study 20 , 21 which had a pixel size of 0.45 mm and slice thickness of 0.8 mm in Mimics. The aorta, iliac and femoral artery 3D model network was obtained from reconstructed and modified imaging data from a previous study 22 , which used continuous arterial phase CT imaging with a slice thickness and increment size of 2.5 mm and 1 mm, respectively.
Computational fluid dynamics
Simulations were developed in the commercial CFD package STAR-CCM+. In this study, we created two investigation cases, Earth gravity and simulated microgravity, which both used the same 3D arterial geometry.
We used a combination of a trimmer cell mesh in the core of the fluid geometry and prescribed anisotropic prism layer cells to capture the behaviour of the fluid at the near-wall boundary. These prism layer cells were distributed in both the small and large arterial vessels with variable thickness to the regions of the retina, cerebrovasculature, neck and coronary arteries using volumetric controls. To ensure mesh independence, we used the non-uniform refinement ratio formulation of the grid convergence index (GCI) 27 , 28 across a range of different haemodynamic parameters, with calculated GCI values for mass flow rate and WSS metrics falling below 2–3%, indicating sufficient mesh discretisation 18 , 19 , 29 . Mesh sizes can be found in Supplementary Table 1 , settings in Supplementary Table 2 and GCI results in Supplementary Table 3 . The final mesh consisted of ~43 million elements.
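A minimal sketch of the GCI calculation, assuming the standard three-grid procedure with non-integer refinement ratios (the cited non-uniform formulation); the metric values and cell counts below are illustrative, not the study's.

```python
import math

def gci_fine(phi1, phi2, phi3, n1, n2, n3, fs=1.25, dim=3):
    """Grid convergence index (%) on the finest grid, following the standard
    three-grid procedure with non-integer refinement ratios.
    phi1..phi3 are a haemodynamic metric (e.g. surface-averaged WSS) on the
    fine, medium and coarse meshes; n1..n3 are their cell counts.
    Assumes non-zero, same-signed error differences."""
    r21 = (n1 / n2) ** (1.0 / dim)            # effective refinement ratios
    r32 = (n2 / n3) ** (1.0 / dim)
    e21, e32 = phi2 - phi1, phi3 - phi2
    s = 1.0 if e32 / e21 > 0 else -1.0
    p = abs(math.log(abs(e32 / e21))) / math.log(r21)   # initial guess
    for _ in range(50):                        # fixed-point iteration for p
        q = math.log((r21 ** p - s) / (r32 ** p - s))
        p = abs(math.log(abs(e32 / e21)) + q) / math.log(r21)
    e_a21 = abs((phi1 - phi2) / phi1)          # relative fine-grid error
    return 100.0 * fs * e_a21 / (r21 ** p - 1.0)

# Illustrative mass-flow values on three meshes (not the study's data)
gci = gci_fine(10.00, 10.05, 10.20, 43e6, 21e6, 10e6)
```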
Blood was modelled as an incompressible fluid with a density of 1050 kg m -3 30 . We assumed rigid walls with a no-slip boundary condition and a laminar flow regime, as this is expected in the majority of the fluid domain under normal healthy conditions. To capture the variation in blood viscosity due to both the non-Newtonian shear thinning nature as well as a reduction in viscosity due to the Fåhraeus-Lindqvist (FL) effect, a blended viscosity model was implemented. Using wall distance, we determined vessel diameter and prescribed the FL viscosity model 30 , 31 below vessel diameters of 0.6 mm 32 , the Carreau-Yasuda model as described by Karimi et al. 33 ( η ∞ = 0.0035 Pa s; η 0 = 0.16 Pa s; λ = 8.2 s; a = 0.64; n = 0.2128) in vessels greater than 1.2 mm, and linearly interpolated between these two models within this diameter range (0.6–1.2 mm).
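The blended viscosity model can be sketched as below, using the Carreau-Yasuda parameters quoted above. Since the exact Liu et al. Fåhraeus-Lindqvist formulation is not reproduced in this excerpt, the widely used empirical in-vitro fit of Pries et al. at 45% haematocrit stands in for it here.

```python
import math

# Carreau-Yasuda parameters quoted in the text (Karimi et al.)
ETA_INF, ETA_0 = 0.0035, 0.16    # Pa s
LAM, A_CY, N_CY = 8.2, 0.64, 0.2128
MU_PLASMA = 1.2e-3               # Pa s, plasma viscosity

def carreau_yasuda(shear_rate):
    """Shear-thinning viscosity used in vessels wider than 1.2 mm."""
    return ETA_INF + (ETA_0 - ETA_INF) * (
        1.0 + (LAM * shear_rate) ** A_CY) ** ((N_CY - 1.0) / A_CY)

def fl_viscosity(d_um):
    """Diameter-dependent (Fahraeus-Lindqvist) viscosity for vessels below
    0.6 mm. The empirical in-vitro fit of Pries et al. at 45% haematocrit
    stands in here for the Liu et al. formulation used in the paper."""
    rel = 220.0 * math.exp(-1.3 * d_um) + 3.2 - 2.44 * math.exp(-0.06 * d_um ** 0.645)
    return MU_PLASMA * rel

def blended_viscosity(d_mm, shear_rate):
    """Linear blend between the two models across 0.6-1.2 mm diameters."""
    if d_mm <= 0.6:
        return fl_viscosity(d_mm * 1000.0)
    if d_mm >= 1.2:
        return carreau_yasuda(shear_rate)
    w = (d_mm - 0.6) / 0.6       # 0 at 0.6 mm, 1 at 1.2 mm
    return (1.0 - w) * fl_viscosity(d_mm * 1000.0) + w * carreau_yasuda(shear_rate)
```

In the simulation itself the local diameter is inferred from the wall-distance field; here it is simply passed in as an argument.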
For the gravity case, we used the mass flow waveform from Brown et al. 34 , which was prescribed in terms of a parabolic velocity profile at the aortic root. For each of the retinal arteriole outlets, we calculated outlet resistances using a structured asymmetrical fractal tree specific for retinal arteries as described by Malek et al. 31 , which is an extension of methods developed by those such as Olufsen 35 . Briefly, this method describes the branching of the daughter vessel radius from the parent vessel radius in terms of an exponent law and an asymmetry index, which allows for asymmetrical weighting of vessel branching. To calculate the resistances associated with a fractal tree network, a length ratio was assumed that varied depending on the branching vessel diameter. Vessel outlets that branched to a diameter below an assumed retinal capillary bed diameter of 4 μm 36 were assumed to have a pressure of 0 mmHg. The non-Newtonian behaviour of blood viscosity within small arterioles due to the FL effect is known to substantially affect upstream haemodynamics 37 . Consequently, we used an implementation of the FL viscosity model described by Liu et al. 30 , assuming a haematocrit value of 0.45, a plasma viscosity of 1.2 mPa s and blood density of 1050 kg m -3 30 . Resistance values calculated for each retinal arteriole outlet were then converted to a corresponding effective viscosity value using the Hagen-Poiseuille equation for incompressible laminar fluid flow within cylindrical pipes (Eq. 1 ). Where, for outlet i , μ i is the effective viscosity, R i is the fractal tree calculated resistance, and r i and L i are the radius and length respectively.
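The display equation referenced as Eq. 1 did not survive extraction; rearranging the Hagen-Poiseuille law, $R = 8\mu L/(\pi r^4)$, for the effective viscosity, consistent with the symbol definitions above, gives the following reconstruction:

```latex
\mu_i = \frac{\pi \, r_i^{4} \, R_i}{8 \, L_i} \qquad \text{(Eq. 1)}
```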
We then implemented each representative resistance effective viscosity value within a corresponding extruded outlet region specific for each arteriole. The extrusion length was set as double the outlet diameter and the distal surface was prescribed a zero-pressure condition, as per the assumption of prescribing a pressure of 0 mmHg at the capillary bed.
For each of the remaining arterial outlets outside of the retinal arterioles, we calculated the desired resistance for each outlet, which we then converted into a viscosity term using the same Hagen-Poiseuille Eq. ( 1 ) and applied this within an extruded outlet region. To do this, we assumed a pressure drop from systolic pressure to zero across each extruded outlet region (representing the pressure drop towards distal capillary beds) and used the corresponding systolic volume flow rate from the pressure and inlet waveforms respectively from Brown et al. 34 . The volumetric flow to each outlet was then scaled primarily by the percentage distribution of cardiac output to an overarching arterial region, and then secondarily using the corresponding outlet radius relative to the other outlet radii within the same arterial region using Murray’s law (Eq. 2 ). For the percentage of cardiac output to each region, we obtained values from literature of the estimated percentage of cardiac output to different arterial regions, which are summarised in Table 1 . Flow to the subclavian arteries was calculated from the residual of total cardiac output and paired arterial regions were assumed to have symmetrical distribution of flow to each. Where, for outlet i , R i is the calculated resistance, P sys and Q sys are the systolic pressure and flow values from Brown et al. 34 , CO split is the estimated percentage split of cardiac output to an arterial region as summarised in Table 1 , r i is the outlet radius and N is the number of outlets in a corresponding arterial region.
Each resistance outlet viscosity term was then implemented within a corresponding extruded outlet region, which was prescribed a length twice the outlet diameter and a zero-pressure boundary condition was imposed at the distal extrusion surface. For the gravity case, we prescribed the typical Earth gravitational acceleration of 9.81 m s –2 38 (1 g) acting inferiorly to emulate an upright position.
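Eq. 2 itself is also missing from this excerpt; the sketch below implements the two steps the text describes, Murray's-law flow splitting followed by Hagen-Poiseuille conversion to an effective viscosity, reconstructed from the symbol definitions above. The pressure, flow and radii values are hypothetical.

```python
import math

def outlet_flow_split(q_sys, co_split, radii):
    """Murray's-law (flow ~ r^3) split of a region's systolic flow."""
    r3 = [r ** 3 for r in radii]
    total = sum(r3)
    return [q_sys * co_split * x / total for x in r3]

def outlet_resistances(p_sys, q_sys, co_split, radii):
    """Resistance of each outlet, R_i = P_sys / Q_i (reconstructed Eq. 2)."""
    return [p_sys / q for q in outlet_flow_split(q_sys, co_split, radii)]

def effective_viscosity(resistance, radius, length):
    """Hagen-Poiseuille effective viscosity, mu = R*pi*r^4 / (8*L) (Eq. 1)."""
    return resistance * math.pi * radius ** 4 / (8.0 * length)

# Hypothetical region receiving 15% of cardiac output through three outlets
p_sys = 120.0 * 133.322            # systolic pressure, Pa
q_sys = 4.0e-4 / 60.0              # systolic volume flow, m^3/s (illustrative)
radii = [1.5e-3, 1.0e-3, 0.8e-3]   # outlet radii, m
resist = outlet_resistances(p_sys, q_sys, 0.15, radii)
# Extrusion length is twice the outlet diameter, i.e. 4 * radius
viscosities = [effective_viscosity(R, r, 4.0 * r) for R, r in zip(resist, radii)]
```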
For the simulated microgravity case, we modified the inlet waveform from Brown et al. 34 to account for the effects observed during spaceflight. Cardiac output is generally reported to increase in response to microgravity, with documented increases of 10% 39 , 20% 40 , 41 as well as up to 30-40% 42 , and even in excess of 50% 1 , 43 . Heart rate is known to be relatively constant during spaceflight 44 , if not slightly decreased 45 , indicating that there is an increase in stroke volume. To model this change, we vertically scaled the waveform from Brown et al. until the stroke volume (and hence cardiac output) was increased by an assumed 20% 46 with the aim to emulate a moderate increase in pre-load from increased venous return as observed during spaceflight 42 , 46 . Furthermore, arterial resistance of the external iliac artery outlets (from Table 1 ) was increased by 93% to emulate the increased lower limb vascular resistance observed during spaceflight 47 relative to an upright position, while all other outlets remained at the corresponding Earth gravity resistances, as few other arterial networks have observed changes in resistance in response to microgravity 48 . Finally, we set the gravitational acceleration to 0 m s -2 .
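The microgravity boundary-condition changes described above amount to three parameter edits relative to the gravity case, sketched here with hypothetical inputs (the outlet names are illustrative labels, not the solver's):

```python
def microgravity_case(inlet_waveform, outlet_resistances):
    """Derive simulated-microgravity boundary conditions from the gravity
    case: +20% stroke volume (and cardiac output), +93% external iliac
    outlet resistance, gravitational acceleration set to zero."""
    scaled_inlet = [1.20 * q for q in inlet_waveform]      # +20% cardiac output
    scaled_res = {name: (1.93 * r if name.startswith("external_iliac") else r)
                  for name, r in outlet_resistances.items()}
    gravity = 0.0                                          # m s^-2
    return scaled_inlet, scaled_res, gravity
```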
Simulation execution
All simulations were solved using the finite-volume method within STAR-CCM+. We used the segregated flow solver and the implicit unsteady model with second-order temporal discretization, which uses the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE) algorithm for coupling pressure and velocity. We used a time step of 0.001 s and inner iterations were terminated if normalised momentum and continuity residuals fell below 10 -4 . Simulations were run for 3 cardiac cycles, whereupon data was extracted over a final fourth cycle, which was sufficient for the difference in cycle averaged metrics (i.e., global WSS, ICA and VA flow, etc.) to remain below 2–3% between the final two cycles for both the control and simulated microgravity cases. Simulations were run on Magnus, a Cray XC40 supercomputer (Pawsey Supercomputing Centre, Perth, Australia) using 600 cores across 25 compute nodes, each providing 24 cores per node. Simulations required approximately 20,500 core hours to complete, equating to 34 h of run time.
Data extraction
For each case, we extracted mass flow rates leading to the cerebrovasculature and retina, maximal velocity waveforms within the CRA and M1 segments of the middle cerebral artery (MCA) and surface averaged time-averaged WSS (TAWSS), and oscillatory shear index (OSI) within the retina, Circle of Willis (CoW), carotid bifurcations, coronary and iliac arteries as well as within the ascending and descending aorta. Surface averaged data is presented as surface average ± surface standard deviation.

Results
Haemodynamic responses to simulated microgravity
Qualitative distributions of the relative change in TAWSS and OSI between control and simulated microgravity conditions across the continuous arterial geometry as well as detailed views of regions of interest can be seen in Fig. 2 . We extracted the surface averaged haemodynamic metrics across regions of interest throughout the entire 3D geometry (Fig. 3 ). Across both cases, absolute TAWSS was greatest in the CoW. The most substantial differences in TAWSS between gravity and simulated microgravity cases were found in the coronary arteries with increases of 41% (2.54 ± 2.74 Pa vs. 1.80 ± 1.97 Pa), in the left and right carotid bifurcations with increases of 36–37% (left, 3.72 ± 3.11 Pa vs. 2.74 ± 2.21 Pa; right, 4.14 ± 3.93 Pa vs. 3.02 ± 2.72 Pa), in the CoW with increases of 37% (6.04 ± 5.66 Pa vs. 4.40 ± 3.91 Pa) and within the left and right retinal arterioles with increases of 29-31% (left, 0.76 ± 2.27 Pa vs. 0.58 ± 1.92 Pa; right, 0.65 ± 2.37 Pa vs. 0.50 ± 1.99 Pa). Less substantial increases of 23% in the ascending (2.39 ± 1.08 Pa vs. 1.95 ± 0.91 Pa) and 17% in the descending (2.96 ± 5.17 Pa vs. 2.52 ± 4.05 Pa) aorta were found. In comparison, the TAWSS in the iliac arteries decreased by 4% (3.58 ± 2.30 Pa vs. 3.73 ± 2.35 Pa). Across both cases, absolute OSI was greatest in the ascending and descending aorta. In general, we observed a decrease in surface averaged distributions of OSI between gravity and simulated microgravity cases, with the largest decreases of –19% and –14%, respectively, in the left and right retinal arterioles (left, 0.010 ± 0.133 vs. 0.013 ± 0.137; right, 0.012 ± 0.132 vs. 0.014 ± 0.136), –16% and –10%, respectively, in the left and right carotid bifurcations (left, 0.127 ± 0.097 vs. 0.151 ± 0.107; right, 0.125 ± 0.107 vs. 0.138 ± 0.110) and –14% in the coronary arteries (0.109 ± 0.096 vs. 0.126 ± 0.096). 
Conversely, OSI in the descending aorta and CoW remained unchanged (0–1%), while a 7% increase was observed in the iliac arteries (0.066 ± 0.103 vs. 0.062 ± 0.110).
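The TAWSS and OSI values reported here are assumed to follow the standard definitions, TAWSS = (1/T) ∫|τ| dt and OSI = 0.5(1 − |∫τ dt| / ∫|τ| dt); a sketch for a single wall point sampled over one cardiac cycle:

```python
import math

def tawss_osi(wss_samples, dt):
    """Time-averaged WSS and oscillatory shear index at one wall point from
    WSS vectors (tau_x, tau_y, tau_z) sampled every dt over one cycle:
    TAWSS = (1/T) int |tau| dt ; OSI = 0.5 * (1 - |int tau dt| / int |tau| dt)."""
    t_total = dt * len(wss_samples)
    int_mag = sum(math.sqrt(tx * tx + ty * ty + tz * tz)
                  for tx, ty, tz in wss_samples) * dt
    sx = sum(t[0] for t in wss_samples) * dt
    sy = sum(t[1] for t in wss_samples) * dt
    sz = sum(t[2] for t in wss_samples) * dt
    mag_of_int = math.sqrt(sx * sx + sy * sy + sz * sz)
    tawss = int_mag / t_total
    osi = 0.5 * (1.0 - mag_of_int / int_mag) if int_mag > 0.0 else 0.0
    return tawss, osi
```

Purely unidirectional shear gives OSI = 0, while fully reversing shear gives the maximum of 0.5, which is why OSI falls as flow becomes less oscillatory under simulated microgravity.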
Head and neck artery response to simulated microgravity
In response to a 20% increase in cardiac output at the aortic root in the case of simulated microgravity, the computed velocities and flow rates within the cerebrovasculature were observed to increase (Fig. 4 ). A summary of waveform metrics are presented in Table 2 . We found increases in systolic and average mass flow rates under simulated microgravity conditions compared to gravity conditions. Within the M1 segments of the middle cerebral arteries, peak and average maximal velocity were observed to increase in response to simulated microgravity, preferentially on the left side. Average maximal velocity was 34% and 42% greater in the left compared to the right M1 segment in the gravity and simulated microgravity cases respectively. The CRA observed similar increases in peak and average maximal velocity in response to simulated microgravity. In general, the average velocity leading to the retinal arterioles was 9–10% greater in the left compared to the right eye across both gravity and simulated microgravity cases. Given the fixed CRA cross-section, we extracted average volumetric flow rates (average ± standard deviation) leading to the eyes, where we found the peak and average retinal blood flow increased by 32% (57.7 ± 4.1 μl min -1 vs. 43.8 ± 2.8 μl min -1 ) and 31% (9.9 ± 0.4 μl min –1 vs 7.6 ± 0.3 μl min –1 ), respectively, in the simulated microgravity case compared to the gravity case.
Retinal vasculature response to simulated microgravity
We calculated the mean of surface averaged haemodynamic metrics across the left and right retinal arterioles, which were distributed by corresponding vessel diameter (Fig. 5 ). In general, TAWSS was found to be greatest in the smallest arterioles (g: 0.71–1.12 Pa and μg: 0.92–1.44 Pa across 10–20 μm diameters), followed by the larger diameter vessels (g: 0.60–0.66 Pa and μg: 0.78–0.86 Pa across 90–110 μm diameters). Relative to the gravity case, TAWSS increased uniformly across all diameter bands in response to simulated microgravity (29–30%). OSI was almost uniformly distributed across small to large arterioles (g: 0.012–0.013 and μg: 0.010–0.011 across 10–130 μm diameters). The oscillatory shear decreased with increasing diameter in the larger arteriole vessels above 140 μm, irrespective of exposure condition. OSI also decreased in response to simulated microgravity across all diameter bands (−12 to −19%).

Discussion
Since the beginning of spaceflight, there has been interest in understanding the effects of space travel on the human body. While the implications of microgravity on muscle mass and strength have been investigated in both animal models 49 and humans 50 , the effect on blood flow and arterial biomechanics is less understood. Some studies have examined blood flow stasis and thrombosis using ultrasound and revealed important flow abnormalities during microgravity 51 . Others have used computational modelling to replicate the haemodynamic effects of pressure changes and weightlessness 52 . In this study, we investigated the effects of simulated microgravity on vascular biomechanics in a large three-dimensional model of the arterial system, contiguous from the heart through to the eye.
To achieve this, we combined 3D models from different imaging modality data to develop a large 3D model representative of an arterial blood flow network, similar to methods used previously for 3D geometries spanning from the lower limbs to the CoW 11 , 15 . We developed a simulation framework for CFD analysis with continuous physical fluid characteristics, and applied fractal tree and resistance outlets to this large 3D geometry. We then implemented the rudimentary effects of simulated microgravity in the arterial system. This work serves as a proof of concept for future research seeking to investigate the effects of physiological stimuli on large interconnected arterial networks.
There is considerable interest in the reported vision loss associated with SANS due to prolonged spaceflight. Computer simulations may help reveal some of the biomechanics that may be contributing to the development of SANS 53 . Salerni et al. 10 constructed a 0D model of the cerebrovasculature, retinal and choroidal vessels, incorporating the effects of changes in aqueous humour and cerebrospinal fluid flow, compression of the lamina cribrosa and the osmosis of fluid at the blood-brain barrier (BBB) in response to simulated microgravity. Although the focus of their study was the investigation of different oncotic pressures and their influence on intraocular and transmural pressures, they found that the parallel configuration of the retina and choroid chambers resulted in increases in normalised retinal flow of approximately 5% for the simulated microgravity case with a weakened BBB, while parallel flow in the choroid and the ciliary body gradually decreased. Although our simulation did not incorporate the effects of intraocular, oncotic or intracranial pressures, we did find that the increase in cardiac output and change in gravitational field in simulated microgravity conditions increased retinal blood flow relative to upright Earth gravity conditions. Within the eye, similar changes to those calculated in our simulations have also been observed during spaceflight. Using colour Doppler ultrasound, Sirek et al. 54 measured changes in peak systolic velocity in the CRA before, during and after spaceflight. From a database of 14 astronauts, they found an average increase in velocity of 36.1%, combined across the left and right eyes, from pre-flight to inflight values, similar to the increases in CRA peak velocity calculated in our study (30–31%). The slightly lower relative changes in velocities in our study may be explained by the assumption of rigid geometry, as Sirek et al. also observed an 11% increase in optic nerve sheath diameter, which may compress the CRA, decreasing its diameter and resulting in increased velocity 55 ; this was not accounted for in our model. Interestingly, ground-based experiments have found even greater increases in retinal blood flow, with Laurie et al. 56 measuring CRA velocity increases of 43–48% in HDT and HDT with hypercapnia compared to seated measures.
An interesting question that arises from these findings is what elevated flow, and therefore shear stress, may mean in the context of the retina and conditions such as SANS. Elevation of shear stress to 2 Pa in bovine retinal endothelial cells has previously been shown to significantly increase retinal endothelium permeability, by up to a factor of 14 57 . Higher shear stress conditions (> 1 Pa), as opposed to low-moderate shear (0.1–0.5 Pa), have also been associated with pro-inflammatory responses and barrier dysfunction in human retinal endothelial cells 58 . Higher vascular permeability may not necessarily pose a risk to osmotic balance at the vessel wall provided albumin transport is matched 59 ; however, albumin concentration has been found to be significantly lower in astronauts 60 , 61 . Evidence of retinal endothelial cell dysfunction has been observed previously in mice flown on the International Space Station (ISS), which exhibited significantly higher retinal endothelial cell apoptosis compared to both Earth controls and mice that also flew on the ISS in a centrifuged habitat producing an effective 1 g of artificial gravity 62 . Our results show that an assumed increase in cardiac output of 20%, emulating the increase in pre-load from increased venous return 42 , 46 during simulated microgravity, may result in up to a 30% increase in WSS in the retina, with the higher shear stress concentrated primarily in the smaller arterioles. Although an assumed value of 20% was used in this study 40 , 41 , 46 , studies of long-duration spaceflight of 3–6 months have reported the possibility of greater increases in cardiac output, between 35–41% 42 , with some estimates as high as, or in excess of, 50% 1 , 43 .
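To see why a rise in flow translates almost one-to-one into a rise in WSS in a rigid vessel, consider the idealised Poiseuille relation τ = 4μQ/(πr³). The study itself used full 3D CFD rather than this approximation, and the viscosity and radius below are assumed illustrative values, not simulation parameters:

```python
import math

def poiseuille_wss(q, r, mu=3.5e-3):
    """Wall shear stress (Pa) for fully developed Poiseuille flow.
    q in m^3/s, r in m, mu = assumed blood viscosity in Pa*s."""
    return 4.0 * mu * q / (math.pi * r ** 3)

q_gravity = 43.8e-9 / 60.0   # 43.8 ul/min (reported gravity-case retinal flow) in m^3/s
r = 80e-6                    # illustrative arteriole radius, not from the study

# In a rigid vessel WSS scales linearly with flow at fixed radius, so a ~30%
# rise in retinal flow maps directly onto a ~30% rise in retinal WSS.
ratio = poiseuille_wss(1.30 * q_gravity, r) / poiseuille_wss(q_gravity, r)
print(round(ratio, 2))   # 1.3
```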
As discussed in later sections, the cardiac waveform assumed in our study may underestimate that of a given individual. A higher baseline cardiac output, coupled with the corresponding increase in retinal shear stress attributed to the microgravity environment (where the true increase in cardiac output may exceed the assumed 20%), may predispose these vessels to endothelial dysfunction and leakiness, subsequently contributing to the development of oedema in and around the retina. Recent research has postulated a multi-hit hypothesis for the progression of SANS, whereby any oedema caused by endothelial dysfunction may impair the outflow of cerebrospinal fluid, which is already impaired in space 63 —in turn contributing to the pressure accumulation on the posterior eye 40 . Nonetheless, given the heterogeneous and non-individual-specific nature of the underlying 3D models used in this study, and the mirroring of the retinal vasculature about the left and right sides, these results provide only the initial trends of the shear-stress-related responses to simulated microgravity in the eye. As such, the remaining pathophysiology of this condition remains highly complex and is likely contributed to by a multitude of factors, each requiring subsequent investigation.
Simulated blood flow to the brain has also been reported by Gallo et al. 4 . In comparison to our findings, where blood flow to the cerebrovasculature increased in response to simulated microgravity, they reported general decreases in blood flow in regions throughout the body, such as in the vertebral (–17%) and internal carotid (–19%) arteries. However, their study purposely compared the simulated microgravity results to a reference supine condition. Hence, many of their findings are the inverse of the results in our study, where we observed increases in vertebral and internal carotid artery flows of between 21–28%, our gravity condition being an upright reference position. Nevertheless, as suggested by the authors, increased flow upon exposure to simulated microgravity would be expected relative to an upright condition 4 , 64 . Ground-based emulated microgravity research of neck artery flow by Ogoh et al. 65 measured flow leading to the cerebral vasculature after 57 days of –6 degree HDT rest as an analogue for prolonged spaceflight. Relative to pre-HDT rest in a supine condition, they observed an average reduction in blood flow in the ICA after 30 (–23%) and 57 days (–15%), while the vertebral arteries remained unchanged, resulting in the vertebral arteries carrying an increased proportion of the cerebrovascular flow relative to pre-HDT. Although we report results relative to an upright condition (so that our results appear inverted: increases relative to upright rather than reductions relative to supine), we found greater changes in average flow in the vertebral than in the internal carotid arteries, reflecting an increase in the proportion of cerebrovascular flow carried by the vertebral arteries between the simulated microgravity and gravity cases.
Interestingly, we also generally observed higher flows in the left arteries leading to the cerebrovasculature compared to the right, which is also reflected in the flows leading to the eye in the central retinal artery. Although possibly anatomically specific to an individual, this may be because the left carotid and vertebral arteries originate from branches either closer to or directly from the aortic arch, whereas the right side branches from the brachiocephalic artery. Naturally higher left-side flow has previously been observed, particularly between vertebral artery sides 66 , which is consistent with our finding that left vertebral artery flow was substantially higher than right vertebral artery flow. Though warranting further investigation, additional preferential arterial flow to the left side of the cerebrovasculature may potentially contribute to the findings of additional flow stasis observed in the left jugular vein during spaceflight, which is less pronounced on the right side 67 – 69 .
Blood flow simulations of isolated 3D geometries leading to, and within the cerebrovasculature, under different gravitational loadings have also been performed previously. Kim et al. 17 simulated blood flow through compliant carotid bifurcation and CoW 3D geometries, as well as incorporating an autoregulatory mechanism at the arterial outlets, to investigate the changes observed in response to spaceflight. In the carotid artery bifurcation, they found that in order to maintain consistent blood flow to the outlets as per their autoregulation algorithm, the carotid diameter in the simulated microgravity case increased by 6.2% relative to the upright gravity case. As a result, distributions of TAWSS between the cases were observed to decrease almost uniformly under simulated microgravity relative to upright. Similar changes were also observed leading to and within the CoW, with diameter increases in the ICAs (3%), basilar (4.4%) and MCAs (6.9%) under the simulated microgravity case relative to upright. Similar changes in TAWSS were observed with almost uniform decreases in all regions of the CoW and proximal arteries. In comparison, our simulations used a rigid wall boundary condition preventing vessel wall change, and consequently yielded the inverse result, with almost uniform increases in surface averaged TAWSS across the upper body regions of the 3D geometry.
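The opposing trends between the compliant, autoregulated model of Kim et al. and our rigid-wall result can be rationalised with a Poiseuille-type scaling: at fixed flow, WSS varies as the inverse cube of the vessel radius, so even modest dilation strongly depresses shear. The sketch below is an illustrative back-of-envelope calculation using their reported 6.2% carotid dilation, not a reproduction of either study's boundary conditions:

```python
def wss_change_from_dilation(diameter_increase_frac):
    """Fractional WSS change at fixed flow for a given fractional dilation,
    under the Poiseuille approximation WSS ~ Q / r^3."""
    return (1.0 + diameter_increase_frac) ** -3 - 1.0

# A 6.2% dilation alone reduces WSS by roughly 16.5% at constant flow,
# enough to offset a flow-driven WSS rise of similar magnitude.
change = wss_change_from_dilation(0.062)
print(f"{change:+.1%}")
```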
Within the brain itself, MCA velocity changes have also been observed in response to spaceflight microgravity or terrestrial microgravity emulation. In response to HDT and HDT with induced hypercapnia, Laurie et al. 56 measured increases in average MCA velocity of approximately 20%, similar to the results found in our study (20–28%). In comparison, cerebral blood flow measured in four astronauts after 1 and 2 weeks in space 70 showed non-significant changes in MCA average velocity relative to the pre-flight measurement. However, one astronaut did show a substantial increase in average MCA velocity at the 2-week mark, of approximately 28% relative to pre-flight measurements. The lack of change in MCA velocity could indicate cerebral autoregulation acting over this duration of spaceflight, though this may not necessarily occur across all individuals. Similar non-significant changes in MCA average velocity have also been observed during parabolic flights (15 bouts of 20 s of parabolic freefall), where small average increases (4%) were observed across 16 participants 71 . In comparison to these findings, Iwasaki et al. 72 found that MCA blood flow velocity in 11 astronauts pre- and post-spaceflight (between 3–6 months prior to the flight and within 3 days of landing), for either supine or sitting measurements, significantly increased by between 10–13%. The blood flow velocity was then observed to return to pre-flight levels after a recovery window of between 1–6 months of landing. This increase in MCA velocity is less substantial than the changes observed in our study; however, this could be attributed to the study subjects returning to Earth for imaging, where the reintroduction of Earth's gravitational vector may have influenced cerebral blood flow within the first 3 days of landing.
The question remains, however, what these changes in cerebral flow, found in our study and similarly by others, mean for individuals in microgravity. Although postural changes may cause brain blood flow to increase or decrease throughout a day within autoregulatory bounds, our results show that simulated microgravity produces a constant increase of 20–30% in brain blood flow, and close to 40% increases in shear stress within the brain. This may have consequences: increased perfusion of the brain may lead to exacerbated autoregulatory responses resulting in prolonged vessel dilation and a reduced myogenic response, which is consistent with mouse spaceflight models 73 , decreased cerebrovascular resistance, and consequently any increased pressure acting on the cerebral endothelium could result in oedema 74 . Typical time-averaged shear acting across the sensitive BBB within the cerebrovasculature is in the range of 0.3–3 Pa 75 , and moderate shear stress within this range has been found to be beneficial to the barrier function of cerebral endothelial cells 76 . However, severely elevated pulsatile shear stress (> 4 Pa) has been associated with the downregulation of BBB tight junction markers, impeding endothelial cell contact 77 . Our findings show that simulated microgravity substantially increases the shear stresses acting both within the cerebrovasculature and the retinal vessels. Coupled with additional blood throughput potentially leading to venous stasis and congestion 1 , 78 , our findings are consistent with causes of fluid oedema associated with exposure to microgravity, which is often observed as a key contributor to the pathogenesis associated with the development of SANS. Nonetheless, future work is required to improve understanding of the development of SANS as well as the clinical implications of constantly elevated flow to the brain.
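For orientation, the shear ranges quoted above can be summarised as a simple lookup. The category labels are our own illustrative shorthand for the cited in vitro findings, not established clinical thresholds:

```python
def classify_cerebral_tawss(tawss_pa):
    """Map a time-averaged WSS value (Pa) onto the ranges quoted in the text:
    typical BBB shear 0.3-3 Pa; pulsatile shear > 4 Pa linked in vitro to
    tight-junction downregulation. Labels are illustrative only."""
    if tawss_pa < 0.3:
        return "below typical range"
    if tawss_pa <= 3.0:
        return "typical"
    if tawss_pa <= 4.0:
        return "elevated"
    return "severely elevated"

print(classify_cerebral_tawss(1.0), classify_cerebral_tawss(4.5))
```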
There is limited data on the effects of microgravity on the coronary arteries, in particular on shear stress. We found that TAWSS in the coronary arteries increased in response to simulated microgravity, although values in both the gravity and simulated microgravity cases fell within the normal and atheroprotective shear stress range of 1–7 Pa 79 . Although anecdotal, this finding may support NASA data reporting that, when compared to healthy terrestrially based control populations, astronauts following spaceflight do not show increased rates of cardiovascular and coronary artery disease or standardised mortality 80 , 81 .
Various in vitro cellular and in vivo animal model studies have been used to investigate the functional effects of emulated microgravity on endothelial cells and arteries. Hindlimb unloading (HU) is an animal model technique involving the suspension of rodents to create a downwards head tilt and pressure gradient across the body, similar to head-down tilt (HDT) in humans. Despite minimal morphological changes 82 , functional responses such as vasoconstriction and relaxation in young HU rat abdominal aorta samples have been found to be reduced relative to control rats 83 . Similarly diminished vasoconstriction responses have also been observed in the mesenteric arteries of HU rats 84 as well as in mice that have flown in space 85 . Alternatively, Shi et al. 86 found that cultured human umbilical vein endothelial cells experiencing 24 h of emulated microgravity conditions in a clinostat upregulated endothelial nitric oxide synthase, increased cell migration and promoted angiogenic pathways. Similar findings of increased endothelial cell migration and nitric oxide production were observed by Siamwala et al. 87 after 2 h of similarly emulated microgravity. Increases in endothelial nitric oxide synthase have also been observed in the aortas of HU mice 88 . In our study, surface-averaged TAWSS across the aorta (1.95–2.52 Pa) increased by 17–23% in the simulated microgravity case. Given that mechanoactivation of endothelial nitric oxide synthase is associated with higher shear stresses 89 , haemodynamic responses to simulated or emulated microgravity may induce, on average, somewhat favourable endothelial conditions in larger arteries such as the aorta, and contribute to any reductions in vasoconstriction.
Blood flow changes in the lower limbs have also been investigated previously using spaceflight data, simulations and ground analogue experiments. Gallo et al. 4 implemented a large 0D-1D model, combining a 1D arterial tree with 0D representations of circulatory regions and baroreceptor mechanisms, to understand the deconditioning of the cardiovascular system during long-duration spaceflight. Compared to upper body flow, they calculated smaller decreases in flow to the lower limb regions of the inner iliac (–2.27%) and femoral (–4.87%) arteries in response to simulated microgravity. Again, these changes in flow are inverted compared to ours because a supine position was used as their reference condition. Despite this, we also observed a greater proportion of flow distributed to the upper body compared to the lower limbs. After 5 weeks of HDT, a study by Palombo et al. 90 found that the diameter of the femoral artery was significantly reduced, while non-significant reductions in wall shear rates were measured with ultrasound at the near (–2%) and far (–9%) walls. In our study, we calculated similarly small decreases in TAWSS across the iliac arteries (–4%) in response to simulated microgravity, while, interestingly, the absolute values of TAWSS remained higher than in upstream regions such as the aorta. Nonetheless, this decrease in shear stress reflects a reduction in blood flow towards the lower limbs in response to simulated microgravity, which is consistent with decreases (though reversible after one month of Earth gravity) in superficial blood flow that have been observed and measured pre- and post-flight in the lower limbs of astronauts 91 . Prolonged reductions in flow to the lower limbs may have implications for metabolic health in these regions, particularly given the documented musculoskeletal wasting that occurs in space with the reduction in gravitational loading 50 , 92 .
Furthermore, prolonged reductions in perfusion to the lower limbs may present additional risks in the context of the development of peripheral artery conditions or disease, which are generally characterised by reductions in perfusion and ischaemia in these regions 93 . Promising potential countermeasures include lower body negative pressure devices, which serve to counteract upward fluid shift and redistribution by introducing negative pressure about the lower limbs 94 . These devices may also serve as a potential countermeasure for the pathophysiological development of SANS, which is suspected to be caused, at least in part, by this fluid gradient and redistribution of fluid throughout the body 94 .
Despite differences in exact geometry, dimensional representation or environmental conditions, and variation in the demographics associated with the imaging sources, we observed some similarities between our results and existing large-scale simulation networks. Blanco et al. 95 developed a 1D anatomically detailed arterial network model consisting of over 2,000 arterial vessels, using a 3D circulatory representation as a geometrical substrate. Although the inlet flow rate in their model was greater than that used in our gravity case, the average flow calculated across the VAs matched our results to within 2%, though the average flow in the ICAs was 25% greater in their study than in ours. Xiao et al. 11 developed a simulation of a 3D deformable full-body arterial network consisting of arterial vessels ranging from the tibial artery to the CoW. Using the diameters and flow data provided in their work, the average velocity in the left middle cerebral artery was found to be 54% greater than in the same artery in our gravity simulation. However, their inlet flow waveform was substantially higher, with a systolic flow rate approximately double that used in our gravity case simulation. Xiao et al. also calculated and presented the shear stress throughout the entire geometry, but highlighted that the mesh used was insufficient to ensure grid independence in the WSS fields and that the results were only intended to provide an indication of capability. Our simulation framework demonstrates that the resolution required to capture WSS and the associated haemodynamic metrics is achievable.
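Cross-study comparisons of this kind reduce to converting published flow and diameter pairs into mean velocities via v = Q/A. The values below are placeholders for illustration, not the actual data from Xiao et al. or from our simulations:

```python
import math

def mean_velocity(q_ml_s, diameter_mm):
    """Mean velocity (cm/s) from volumetric flow (ml/s) and lumen diameter (mm)."""
    area_cm2 = math.pi * (diameter_mm / 10.0 / 2.0) ** 2   # mm diameter -> cm^2 area
    return q_ml_s / area_cm2                               # ml/s is cm^3/s

# Placeholder example: 2 ml/s through a 3 mm MCA-scale vessel.
v = mean_velocity(2.0, 3.0)
print(round(v, 1))   # ~28.3 cm/s
```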
Nonetheless, despite serving as an initial proof-of-concept study for investigating continuously connected arterial networks in response to environments such as simulated microgravity, the methods proposed in this work are not without limitations and require future development.
Firstly, as we used a mixture of 3D model data from different imaging modalities and sources across different subjects, the 3D model developed does not represent a single individual. Consequently, the underlying 3D model heterogeneity may influence the absolute data reported, such as the distributions of surface averaged shear stress or the amplitude of velocity waveforms. As such, although absolute data is reported for reference, the findings from this study aimed to focus on the relative changes and trends offered by rudimentary simulation of microgravity, whereby any systematic heterogeneity effects may be nullified through cancellation given the use of the same 3D model in both Earth gravity and simulated microgravity cases. Nonetheless, by incorporating real imaging data, the model does at least represent a continuous human arterial network that is physiologically possible, albeit not singularly subject specific and formed from sources with varying demographics. Future work using the methods and approaches described in this study would ultimately use individual-specific imaging data for the construction of the continuous arterial network 3D model. One key benefit of this approach, however, is that (as demonstrated in our study) retrospective imaging data can be combined to form the 3D arterial network. Consequently, future work with individual astronaut data would potentially be feasible—enabling greater insights into the haemodynamics occurring throughout a large proportion of the arterial cardiovasculature in the spaceflight environment. Alternatively, subject-specific data could be combined to understand how individuals with pre-existing cardiovascular risk factors may be predisposed in environments of varying gravitational load, such as during spaceflight, on Martian or lunar surfaces, or during the acute hyper-gravity associated with planetary exit or re-entry.
Secondly, we used a rigid wall model that did not account for movement of the arterial wall. Although astronauts undergoing 6-month spaceflights have been observed to experience the equivalent of 10–20 years of arterial aging and stiffening 96 , this remains a limitation, given that vessels are inherently compliant in healthy populations, as would be the case for astronauts currently undertaking spaceflight missions. Wall movement could be achieved in future work using fluid-structure interaction (FSI) modelling. This was not performed in this initial study given the significant additional computational load required for FSI simulation, the computational stability and interfacing requirements across such a large arterial domain, and because the arterial wall thicknesses could not be resolved and the tissue material properties were not known. Furthermore, in future work that would ideally use individual-specific imaging, obtaining wall thickness and tissue material properties would be either impossible, severely invasive, or limited to only the larger arteries. Additionally, these methods would need to account for the vessel pretension embedded in 3D reconstructions from vessel imaging, which represents vessels already at arterial pressure. Despite rigid wall modelling not accounting for the deformation fluctuations experienced throughout the cardiac cycle, and instead representing a snapshot in time, comparable distributions of WSS between rigid and FSI simulations have previously been observed, although rigid wall simulations generally overestimate instantaneous WSS compared to FSI 34 , 97 . Surface- and time-averaged metrics, such as those used in this study, have also been observed to be similar between rigid wall and FSI methods 98 .
Thirdly, for all arterial outlets except for the retinal arterioles, we used a mixture of known estimated flow splits to arterial networks and then Murray’s law to estimate the distribution of flow throughout numerous arterial outlet regions. This approach was adopted due to the simplicity of implementation, but it limited the simulations in terms of accounting for any changes in vessel compliance or autoregulatory constrictions/dilations. Alternative outlet modelling approaches that should be considered include using varying power law exponents based on literature or modelling distal resistance and compliance using multi-element Windkessel models, which may enable the incorporation of autoregulatory mechanisms in the cerebrovasculature as well as account for the effects of venous stasis and congestion that are generally associated with the spaceflight environment 1 , 78 . Additionally, MRI or ultrasound methods could be employed to measure subject-specific regional flow distributions, as opposed to the assumed values as described in Table 1 .
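A minimal sketch of the Murray's-law outlet split described above, assuming the regional flow has already been fixed from the known flow splits (Table 1). The `exponent` parameter corresponds to the varying power-law exponents mentioned as an alternative; this is an illustration of the principle, not the study's implementation:

```python
def murray_flow_split(regional_flow, outlet_radii, exponent=3.0):
    """Distribute a regional flow across outlets in proportion to
    radius**exponent (Murray's law uses exponent = 3)."""
    weights = [r ** exponent for r in outlet_radii]
    total = sum(weights)
    return [regional_flow * w / total for w in weights]

# Arbitrary radii: the outlet with twice the radius receives 8x the flow.
flows = murray_flow_split(10.0, [1.0, 1.0, 2.0])
print(flows)   # [1.0, 1.0, 8.0]
```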
Fourthly, the cardiac output of the inlet flow condition, adapted from Brown et al. 34 , may be inadequate for the geometry developed, which is reflected in the variation between our gravity-condition results and those of other large arterial simulations. Future work should aim to use subject-specific measured cardiac output, which could be obtained using MRI or duplex ultrasound methods. Alternatively, in the absence of cardiac output data, a parametrically swept range of different cardiac outputs could be investigated. However, as the goal of this study was to provide an initial framework for comparing relative changes in response to simulated microgravity in a large 3D continuous arterial network, the relative changes were found to be reasonably consistent with emerging microgravity research and measured data. Additionally, while we incorporated the aortic root as part of the geometry, we implemented the flow at the aortic valve surface as a simplified parabolic velocity profile with a fixed orifice area, neglecting the natural helical and three-dimensional nature of blood flow ejected from the aortic valve.
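The simplified inlet condition can be sketched as a parabolic (Poiseuille-type) profile scaled so that it integrates to the prescribed instantaneous flow rate; the orifice radius and flow value below are assumed for illustration and are not the study's parameters:

```python
import math

def parabolic_inlet_velocity(r, flow, orifice_radius):
    """Axial velocity at radial position r for a parabolic profile over a
    fixed circular orifice, scaled to the instantaneous flow rate."""
    v_mean = flow / (math.pi * orifice_radius ** 2)
    return 2.0 * v_mean * (1.0 - (r / orifice_radius) ** 2)

R = 0.01      # 1 cm orifice radius (assumed)
Q = 4.0e-4    # instantaneous flow in m^3/s (illustrative)

# Centreline velocity is exactly twice the mean; velocity vanishes at the wall.
print(parabolic_inlet_velocity(0.0, Q, R) / (Q / (math.pi * R ** 2)))   # 2.0
```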
Finally, we assumed the flow to be within the laminar regime, which is a common approach in arteries outside of the aorta; however, turbulence is likely induced within the ascending aorta due to the high ejection velocities at the aortic valve, as well as during the deceleration phase of the cardiac cycle. As the key regions of interest in this study were the vessels leading to and within the eye, which are known to exhibit mostly laminar flow, this regime was considered appropriate, and any turbulence generated at the aortic root was assumed to have minimal effect on the reported haemodynamics in these regions. Modelling using large eddy simulation (LES) may be more appropriate in future studies, particularly to investigate changes in haemodynamics within the aorta and nearby larger arteries, though this was not performed due to the high computational cost associated with this modelling approach.
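The laminar-flow assumption can be sanity-checked with order-of-magnitude Reynolds numbers. The density, viscosity, velocities and diameters below are textbook-scale assumptions rather than simulation outputs:

```python
def reynolds(velocity, diameter, rho=1060.0, mu=3.5e-3):
    """Reynolds number for blood flow; rho in kg/m^3, mu in Pa*s (assumed)."""
    return rho * velocity * diameter / mu

re_aorta = reynolds(1.0, 0.025)     # ~1 m/s peak through a 25 mm aorta
re_cra = reynolds(0.1, 160e-6)      # ~0.1 m/s through a 160 um central retinal artery

# The aortic value sits well above the classic ~2300 pipe-flow transition,
# consistent with turbulence near the valve; the retinal value is deep in the
# laminar regime, supporting the assumption where it matters for this study.
print(round(re_aorta), round(re_cra, 2))
```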
In this study, we aimed to demonstrate that large-scale 3D arterial networks can be constructed across a wide range of vessel calibres from 3D models derived from numerous image datasets, and that the resulting geometry can be used to understand the change in haemodynamics in response to simulated microgravity. From our simulations, we found similarities with existing spaceflight simulation models and measured data—specifically that blood flow and shear stress decrease towards the lower limbs and increase towards the cerebrovasculature and within the eyes in response to simulated microgravity exposure relative to an upright position in Earth gravity. This framework may also prove useful for simulating the changes in haemodynamics in other equally challenging environments influencing the cardiovascular system.

Abstract

We investigated variations in haemodynamics in response to simulated microgravity across a semi-subject-specific three-dimensional (3D) continuous arterial network connecting the heart to the eye, using computational fluid dynamics (CFD) simulations. Using this model, we simulated pulsatile blood flow in an upright Earth gravity case and a simulated microgravity case. Under simulated microgravity, regional time-averaged wall shear stress (TAWSS) increased and oscillatory shear index (OSI) decreased in upper body arteries, whilst the opposite was observed in the lower body. Between cases, uniform changes in TAWSS and OSI were found in the retina across diameters. This work demonstrates that 3D CFD simulations can be performed across continuously connected networks of small and large arteries. Simulated results exhibited similarities to low-dimensional spaceflight simulations and measured data—specifically that blood flow and shear stress decrease towards the lower limbs and increase towards the cerebrovasculature and eyes in response to simulated microgravity, relative to an upright position in Earth gravity.
Supplementary information
The online version contains supplementary material available at 10.1038/s41526-024-00348-w.
Acknowledgements
This work was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia. HC is supported by a Forrest Research Foundation Scholarship and an Australian Government Research Training Programme Scholarship at The University of Western Australia. DG is supported by a National Health and Medical Research Council Principal Research Fellowship (APP1080914). We would like to acknowledge Prof Natzi Sakalihasan (University of Liege) and Prof Carl Schultz (University of Western Australia) as we used 3D models derived from medical images acquired as part of their independent research.
Author contributions
All authors contributed to the study conception and design. 3D file generation and simulation preparation: H.C., L.K. and L.P. Simulation execution and data extraction: H.C. Data analysis: H.C., L.K., L.P., D.G. and B.D. Manuscript drafting: H.C. All authors reviewed, commented and edited each iteration of the manuscript. Supervision: D.G. and B.D. All authors read and approved the final manuscript.
Data availability
Data not directly presented in the article, such as geometries and simulation files, can be made available on reasonable request to the authors.
Code availability
The fluid simulation code used in this study is commercially available.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:58 | NPJ Microgravity. 2024 Jan 13; 10:7 | oa_package/12/4a/PMC10787773.tar.gz |
|
PMC10787774 | 38218995 | Introduction
Millimeter-wave (mmWave) technology has attracted huge interest in wireless communications in recent years, in particular for fifth-generation mobile communication (5G) applications. The demand for massive data rates and low latency in 5G systems can be addressed with the large bandwidth available at mmWave frequencies. The Advanced Antenna System (AAS) is recommended as a necessity for 5G applications at mmWave frequencies. AAS employs a large antenna array with beamforming capability to improve the performance and spectral efficiency of 5G systems 1 . AAS requirements for 5G applications from the latest standards can be found in Ref. 1 in terms of frequency bands, antenna element properties, array configurations, and beamforming characteristics. The 3rd Generation Partnership Project (3GPP) unites well-known standards organizations to develop protocols for mobile telecommunication systems 3 . Some standard mmWave frequency bands for 5G systems are 1 – 3 : n257 : 26.5–29.5 GHz; n258 : 24.25–27.5 GHz; n261 : 27.5–28.35 GHz (subset of n257 ).
The single antenna element is recommended to be dual-polarized with a half-power beamwidth of 65° 4 . The 5G antenna array known as AAS utilizes adaptive beamforming, multiple-input multiple-output (MIMO), and Spatial Division Multiple Access (SDMA) 1 , 2 . The main requirements of AAS for 5G mmWave applications are presented in Table 1 .
Beamforming is a key technique in AAS that concentrates the power toward the desired direction and nulls the undesired directions. Beamforming can be implemented as analog, digital, and hybrid configurations. Analog beamforming is easy to implement but has a limited number and characteristics of fixed beams. Digital beamforming employs a separate RF chain for each array element, leading to a complicated structure, but provides very flexible and efficient beamforming. Hybrid beamforming utilizes both analog and digital beamforming as each RF chain is associated with multiple antenna elements 5 , 6 . The hybrid beamforming seems to be more efficient since the beamforming is performed in the analog domain using fewer RF chains.
It is clear that analog beamformers represent an important component of hybrid beamforming systems. Several different analog RF beamforming networks have been proposed, mostly following the Butler matrix and Rotman lens topologies. The Butler matrix is a microwave network that serves as the analog implementation of the fast Fourier transform, comprising couplers and phase shifters. The phase shifts at the output ports can be determined by a combination of the phase shifts of all the signal paths. The Rotman lens is a scanning system that can be used in various systems as a fundamental multiple-beam antenna. In a Rotman lens, the required phase distribution on the antenna ports is achieved by true time delay (TTD) through shaped signal paths, and it maintains a constant time delay over a broadband frequency range of operation 7 .
The Butler matrix is built by integrating couplers, phase shifters, and crossovers, and exhibits significantly narrower bandwidth than most wideband antenna arrays. Rotman lenses are passive beamforming networks that implement true time delay with relatively wideband performance 8 .
The main advantages of Rotman lenses compared to Butler and Blass Matrices are lower weight and hardware cost while having wider bandwidth and beam steering. Therefore, a Rotman lens is suitable for applications that require both a large scan of the radiation pattern and wide frequency range coverage 9 .
In Ref. 10 , a 5 × 8 Butler matrix operating at the frequency range of 27.8–30.8 GHz is presented where output signals with equal power divisions and five differential phases can be obtained. However, this beamforming network does not provide continuous beams.
A 16-element antenna array with a Butler matrix covering the band 26–31.4 GHz and ± 42° beam switching is presented in Ref. 11 . The maximum gain is 12 dBi with − 19 dB SLL, and 9 dBi with − 8 dB SLL for the beams with ± 13° and ± 42°, respectively.
In Ref. 12 a wide angular Rotman lens operating in the 28 GHz band is proposed. The 6-port Rotman lens is connected to eight linear antenna arrays consisting of five series-fed rectangular patches. The angular scan is from − 60° to 60° with a variation of less than 8 dB. However, the bandwidth is low and the gain variation is high.
In Ref. 13 a PCB-based Rotman lens consisting of an eight-element Yagi–Uda antenna array at 28 GHz is demonstrated. The proposed beamformer operates across 25.5–28.5 GHz with 7 switchable beams covering ± 30° with a realized gain of up to 9.4 dBi. However, the scan loss for the side angle (± 30°) is relatively high (4.5 dB).
An extensive review of the literature on mmWave Rotman lenses reveals the following major problems in the existing designs: (1) lack of a Rotman lens design with wide-angle scanning around ± 60° over a wide frequency bandwidth; (2) high scan loss for the wide-angle beams compared to the central beam; (3) poor SLL (less than 10 dB) for the wide-angle beams; and (4) lack of integration of the antenna with the beamforming network.
On the other hand, the Rotman lens beamforming network suitable for 5G AAS needs to be in line with the requirements indicated in Table 1 . Therefore, the design of a Rotman lens with a minimum of 8 antenna elements with possible dual-polarization capability in a wide operational bandwidth over 5G frequency bands (24–30 GHz) for the scanning coverage of ± 60° and SLL > 10 dB is demanded by industry 1 , 2 .
In this research work, a novel wide-angle Rotman lens beamformer is developed to meet the AAS requirements for 5G applications. An exhaustive design methodology for the Rotman lens is presented, covering its different components, including the parallel-plate contour, beam ports, array ports, and dummy ports. For the antenna elements, an end-fire Vivaldi antenna, in which the direction of radiation is along the axis of the antenna, is suggested for easy integration with the beamformer; it also facilitates dual-polarization implementation as well as possible stacking to obtain 2-D beamforming. The beam ports and non-uniform array ports are designed in detail to provide good matching and enhanced SLL. A novel integrated matched load is also introduced for the dummy ports. The designed beamformer is fabricated and tested to verify the results. A comparison of the proposed Rotman lens with recent works is also provided.
Addressing the problems identified in the existing designs, the merits of the proposed Rotman lens beamformer can be summarized as follows: (1) an optimized Rotman lens design methodology for a wide scanning angle of ± 53°, covering ± 60° with the 3 dB beamwidth; (2) low scan loss (gain drop) of less than 1.9 dB for the side beams (± 53°); (3) non-uniform antenna ports that satisfy SLL > 10 dB for the side beams (± 53°); (4) an end-fire antenna element integrated with the beamformer to eliminate connector loss, with possible dual-polarization and stacked 2-D beamforming capabilities; and (5) a PCB-based, low-cost, high-performance beamformer covering the 5G n257 , n258 , and n261 frequency bands.
The designed Rotman lens beamformer including 8 beam ports and integrated 8 Vivaldi antennas is fabricated and measured as shown in Fig. 11 to validate the design and simulation results.
The SMPM cables are used for measurement to be compatible with the input connectors. One port is excited at each stage and the other ports are terminated with SMPM terminators as designed in Ref. 26 .
Owing to the suitable matching of the tapered beam ports and the large number of S-parameter combinations, only the measured S-parameters shown in Fig. 12 are presented. Reflection coefficients and mutual coupling of the different ports are shown with solid lines and dashed lines, respectively.
The measured results show good matching of the input ports and mutual coupling better than 28 dB for the 24–40 GHz band.
To test the beamformer performance, the radiation patterns are measured and plotted in comparison with the simulated patterns at 24 GHz, 27 GHz, and 30 GHz in Fig. 13 . The simulation and measurement results are depicted in the form of solid lines and dashed lines respectively. The simulated and measured peak gain and simulated radiation efficiency of the Rotman lens beamforming network are also presented for beam ports 1, 2, 3, and 4 in Figs. 14 and 15 due to achieving identical results for the symmetric ports.
The radiation pattern results show good agreement between simulations and measurements. Beam steering from − 53° to 53° in 15° increments is obtained, with a slight beam-pointing error of less than 2°. The radiation patterns show a scan loss at the maximum scanning angle of less than 1.6 dB, 1.8 dB, and 1.9 dB at 24 GHz, 27 GHz, and 30 GHz, respectively, due to the Rotman amplitude error and the single-element radiation pattern. Also, the scanning directions remain unchanged across the bandwidth, as the Rotman lens is a true-time-delay beamformer. The radiation patterns exhibit some ripples at wide angles, which are due to imperfections and multipath reflections in the anechoic chamber. The proposed Rotman lens offers an SLL better than 10 dB for all ports, which meets the minimum AAS requirements.
The measured peak gain shows an average gain of 10 dBi for the center ports (Port 4 and Port 5) and a scan loss of less than 1.9 dB for the wide-angle ports (Port 1 and Port 8). The average radiation efficiency is 62% for every input feed port of the Rotman lens across the bandwidth.
As a result, the proposed Rotman lens provides proper beamforming capability over the wide range of ± 53°, covering ± 60° with the 3 dB beamwidth, across the wide target frequency bands.
The proposed Rotman lens beamformer is compared with some recently recognized Rotman lenses for 5G mmWave applications as summarized in Table 6 .
As can be seen, most of the beamformer designs are restricted to limited bandwidth and scan angle, while the SLL is below 10 dB and the scan loss is relatively high for the wide-angle beams. The proposed Rotman lens beamformer in this work offers a wide bandwidth (24–30 GHz) and a wide scanning range of ± 60° with SLL > 10 dB and scan loss < 1.9 dB, which meets the 5G AAS requirements and exhibits better performance than most previously published designs.
Beamforming is an important part of mmWave technologies to improve the link budget and spectral efficiency and to enable MIMO and SDMA. The beamforming requirements for 5G mmWave applications indicate wide angular coverage of ± 60° and SLL > 10 dB. A wide-angle Rotman lens with a detailed, improved design methodology to satisfy 5G mmWave beamforming requirements is presented. The enhanced SLL is obtained by imposing an optimized distribution on the aperture of the antenna ports. The proposed seamless beamformer and antenna array is fabricated using low-cost PCB technology on a Rogers 4350B substrate with a thickness of 0.254 mm and successfully tested to verify the feasibility of the design methodology. The overall results demonstrate that the proposed beamformer exhibits wideband impedance and radiation characteristics over a bandwidth of 24–30 GHz and beam-scanning capability over a scan range of ± 60° with SLL > 10 dB and scan loss < 1.9 dB. Dual-polarization and 2-D beamforming configurations of the beamformer are also proposed.
The resulting beamformer features unique characteristics such as wide angular coverage with acceptable SLL and low scan loss over a wide frequency bandwidth of 24–30 GHz covering three standard 5G n257 , n258 , and n261 bands.
The proposed beamforming network is suitable for various mmWave and 5G applications such as AAS, massive MIMO systems, hybrid beamforming systems, remote sensing, and automotive radars. | The combination of 5G with millimeter-wave (mmWave) frequencies offers massive capacity and low latency to enable the full 5G experience. High directive gain and beamforming are considered essential for mmWave 5G systems. The main requirements of the beamforming network for 5G mmWave applications are scanning coverage of ± 60° and SLL > 10 dB in a wide operational bandwidth over the standard 5G frequency bands. In this paper, a novel PCB-based wide-angle Rotman lens beamformer is designed, simulated, and successfully measured to meet the mentioned requirements for 5G mmWave applications. A comprehensive, improved design methodology is provided for all components of the Rotman lens to reach a wide scanning angle, enhanced sidelobe level, and low scan loss. The end-fire Vivaldi antenna is selected as the array element for easy integration with the beamforming network as well as for its capability to be used in a dual-polarization configuration. The proposed Rotman lens is operational in the 24–30 GHz frequency band, covering the 5G n257 , n258 , and n261 frequency bands. The results show eight nearly constant beams across the whole bandwidth, steering from − 53° to 53° in 15° increments to provide ± 60° coverage with SLL > 10 dB and scan loss < 1.9 dB. The novelties of this work include an effective design methodology for an optimized Rotman lens with a wide scan angle and low phase and amplitude errors, array ports based on a non-uniform distribution, and integration with an end-fire antenna for possible dual-polarization and 2-D beamforming capabilities. The comparison of the proposed beamformer with the most recent works shows several advantages in terms of integrated structure and performance, including bandwidth, wide scanning angle, SLL, and scan loss. 
With such performance, this beamformer can be used for various mmWave and 5G applications such as advanced antenna systems, massive MIMO systems, and hybrid beamforming systems.
Subject terms | Rotman lens beamformer design
The Rotman lens is a wide-angle lens that can be utilized as a wideband beamformer. The schematic of a conventional Rotman lens is shown in Fig. 1 ; it consists of a parallel-plate contour surrounded by M beam ports and N array ports. Each beam port steers the beam in a certain angular direction, yielding M discrete beams. The array ports are connected via transmission lines to the radiating elements of a linear antenna array. Loaded dummy ports are connected to the parallel-plate region to provide appropriate termination 14 .
The design of the Rotman lens starts with defining the general requirements of the beamformer such as the operating frequency range, the number of beam ports ( M ), the desired beam steering angle (± θ ), the number of radiating elements ( N ) for specific gain performance, and the spacing between array elements ( d ) 15 .
The circular arc on the left side of the Rotman lens as the beam contour has the on-axis focal length located at 0° angle and the off-axis focal length located at angles α°. The general shape of the parallel-plate contour is determined based on the four basic Rotman lens parameters as shown in Fig. 1 .
Also, the length of the transmission lines connecting each array port to the lens ( W ) is essential for the Rotman lens design 14 .
An initial geometrical condition on the on-axis focal length for appropriate amplitude performance, and for the physical arrangement of the input and output ports, considering the maximum scanning angle θ and the array length ( N − 1) d , can be defined as 16 , 17 :
The angle between the on-axis focal length and the off-axis focal length is the focal angle, and the ratio between the two focal lengths defines a second lens parameter.
The expansion factor is the ratio between the focal angle and array beam angle as:
The indirect factor of utility controls the amplitude and phase errors and corresponds to the ratio of the distance of any point on the array from the axis to the on-axis focal length, as expressed by:
The maximum distance of gives the maximum of the indirect factor of utility :
The upper limit of the indirect factor of utility appears when the transmission line W = 0 as:
The limiting value of the indirect factor of utility is depicted for several focal angle values in Fig. 2 . Because the useful range of this factor is between 0.5 and 0.8, Fig. 2 can be used to choose an appropriate range for a given focal angle 15 .
In the case of fabricating the Rotman lens on a dielectric substrate with permittivity ε r , all dimensions of the lens are scaled down by the dielectric factor.
The transmission line length that connects the element port to the array antenna can be calculated as 14 :
The parameters and have a comparable effect on Rotman lens geometry and significantly influence the gain performance and phase error. The parameters and need to be selected in conjunction with other parameters to reach the optimized gain performance and phase error reduction.
The phase error results from the path-length difference between an arbitrary central ray through the center of the array ports and any other arbitrary ray, and it can be evaluated as a function of the scanning angle θ and the indirect factor of utility. Thus, the normalized path-length error can be calculated as 14 , 17 : where is the normalized distance from a point on the beam arc to the origin, is the normalized distance of any other point, is the normalized transmission line length, is the dielectric constant, and is the effective dielectric constant of the transmission line, as shown in Fig. 1 . The total path difference stemming from all ports can be expressed as 14 , 17 :
In order to estimate the amplitude performance, the approximation of the coupling between a beam port width and array port width can be presented as 17 : where is the phase constant, , is the separation between ports i and j , and and are the angles between the boresight direction of ports and the line connecting the port phase centers.
The results of the Rotman lens design show that minimizing the path-length error alone does not lead to acceptable amplitude performance 17 . Therefore, the optimum performance of the Rotman lens can be achieved by considering both the path-length error and the amplitude performance. Various optimization methods, in particular numerical methods, can be used to choose the parameters optimally so as to reach reasonable phase error and amplitude performance simultaneously.
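The joint phase/amplitude optimization described above can be sketched with a simple elitist genetic algorithm. This is an illustrative sketch only: the paper performs the optimization in Matlab against its own (elided) path-length-error and coupling equations, so the cost function below is a hypothetical smooth placeholder, and the optimum location and parameter bounds are assumptions.

```python
import random

def lens_cost(params):
    # Hypothetical stand-in for the real objective, which would evaluate the
    # path-length (phase) error and amplitude performance of the lens for a
    # candidate (focal-ratio, focal-angle) pair. Minimum placed arbitrarily.
    g, alpha_deg = params
    return (g - 1.08) ** 2 + ((alpha_deg - 35.0) / 35.0) ** 2

def genetic_search(cost, bounds, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                      # elitist selection: keep best half
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)       # crossover: average two parents
            child = []
            for (x, y), (lo, hi) in zip(zip(a, b), bounds):
                v = (x + y) / 2 + rng.gauss(0.0, 0.02 * (hi - lo))  # mutation
                child.append(min(max(v, lo), hi))
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = genetic_search(lens_cost, bounds=[(0.8, 1.3), (20.0, 50.0)])
```

In a real design, `lens_cost` would be replaced by a weighted combination of the worst-case normalized path-length error and the amplitude deviation over all beam ports.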
After designing the shape of the parallel-plate contour, the beam ports, array ports, and dummy ports should be designed. Firstly, the phase center of the corresponding ports should be determined and then the ports need to be matched with transmission lines.
The coordinate of the array port phase center can be calculated as 18 :
The phase center location for beam ports can be also expressed as:
When the locations of the beam ports and array ports are determined, a horn-type tapered transition can be used to connect the transmission lines to the lens body, aiming to provide appropriate matching. Also, the side walls of the lens body are connected to a number of matched dummy ports to create a reflection-less parallel-plate contour. There is no specific requirement for the number of dummy ports. Some designers implement multiple dummy ports, while others utilize a single dummy port on each side of the lens body. However, some studies indicate that the number of dummy ports does not change the main-beam performance and may only affect the side lobe levels (SLLs) 19 , 20 . Therefore, the main intention of the beam port, array port, and dummy port design is to provide appropriate reflection and transmission coefficients and SLLs.
Novel wide-angle Rotman lens beamformer design
The proposed beamforming network suitable for the 5G mmWave Advanced Antenna System (AAS) should be at least an 8 × 8 array supporting the mmWave frequency bands allocated to 5G as defined in Section I, steering ± 60° and ± 15° in the horizontal and elevation planes, respectively. Thus, designing an 8-element, wideband, wide-angle (± 60°) beamformer with dual-polarization capability is a basic requirement for 5G AAS. A novel Rotman lens is designed to meet these basic beamforming requirements for 5G applications. The Rotman lens is modeled and optimized with Matlab. The modeled beamformer is then simulated and optimized using the Ansys HFSS electromagnetic simulator package. The design procedure consists of the following steps: (1) end-fire single antenna element design for dual-polarization capability; (2) lens body design and optimization for wide-angle steering (± 60°); (3) beam and array port and connecting transmission line design; (4) dummy port design; and (5) non-uniform array port design for SLL reduction.
Single element design
The single antenna element suitable for 5G AAS is recommended to be dual-polarized with a half-power beam width of at least 65° and in line with 3GPP mmWave frequency bands. In addition, the single element is preferred to be integrated into the beamforming network to avoid using large numbers of connectors and associated losses. Thus, an end-fire type antenna element is selected for easy integration to the beamforming network as well as its capability to use in dual-polarization configuration. As a consequence, a novel Vivaldi antenna is designed to be utilized in the AAS beamformer. The proposed antenna is fabricated on Rogers 4350B ( ) substrate with a thickness of 0.254 mm. The design methodology, fabrication, and results are comprehensively discussed in Ref. 21 .
The structure of the proposed Vivaldi antenna element is presented in Fig. 3 . The antenna has a compact size of 12 × 5.5 × 0.254 mm 3 . Therefore, the element spacing is d = 5.5 mm when the antenna is used in the array. The proposed antenna operates over 23–45 GHz, covering the 5G n257, n258, n259, n260, and n261 frequency bands, and exhibits a nearly constant end-fire radiation pattern with a measured gain of more than 5 dBi across the whole bandwidth 21 .
Lens body design
It is intended to design a basic 8-element Rotman lens beamformer between 24 and 30 GHz covering 5G n257 , n258 , and n261 frequency bands for AAS applications. The Rotman lens is designed for fabrication on a dielectric substrate. The dielectric substrate used is Rogers 4350B ( ) with a thickness of 0.254 mm. The design methodology can be presented as a step-by-step process based on the approach detailed in “ Rotman lens beamformer design ” section.
Determine the requirements
As indicated, the operational frequency band is 24–30 GHz. The number of array elements is N = 8 and the number of beam ports is assumed to be M = 8. The steering angle to meet the AAS requirement is ± 60°, which is considered a wide angle. The 3 dB beamwidth of the N -element array is:
Considering the array beamwidth, the maximum required steering angle can be moderated. Thus, assuming a steering angle of ± 53°, the 3 dB beamwidth of the resultant array can cover the AAS coverage requirement of ± 60°.
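This coverage argument can be checked numerically with the conventional uniform-array beamwidth estimate HPBW ≈ 0.886 λ/(L cos θ), the same relation invoked later for the scanned-array HPBW. The aperture length L = N·d and the mid-band evaluation frequency are illustrative assumptions, not values taken from the paper.

```python
import math

def hpbw_deg(freq_hz, n_elem, spacing_m, scan_deg):
    """Approximate 3 dB beamwidth of a uniform linear array scanned to scan_deg."""
    lam = 3.0e8 / freq_hz
    aperture = n_elem * spacing_m            # assumed aperture length L = N * d
    bw_rad = 0.886 * lam / (aperture * math.cos(math.radians(scan_deg)))
    return math.degrees(bw_rad)

# 8 elements, d = 5.5 mm, mid-band 27 GHz, side beam steered to 53 degrees
bw_side = hpbw_deg(27e9, 8, 5.5e-3, 53.0)    # beam broadens off broadside
covered = 53.0 + bw_side / 2.0               # edge of the 3 dB side beam
```

With these assumptions the broadside beam is roughly 13° wide, the ±53° beams broaden to about 21°, and the edge of the 3 dB side beam reaches beyond ±60°, consistent with the coverage claim.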
The element spacing, equal to the width of a single antenna element, is d = 5.5 mm. The substrate parameters are ε r = 3.66 and h = 0.254 mm. Calculate the minimum on-axis focal length using Eq. ( 1 ). Set the initial parameter values and calculate the initial parameters using ( 5 ) and ( 3 ), respectively; the initial value of the indirect factor of utility can be selected from Fig. 2 . Optimize this value using ( 9 ) and ( 10 ) to obtain optimum phase and amplitude performance. In this research work, the genetic algorithm (GA) is employed for the numerical optimization using Matlab. Specify the difference in transmission-line length for the side array port compared to the center array port ( W ) using ( 7 ). Finally, divide the dimensions by the dielectric scaling factor, as the designed dielectric constant recommended for Rogers 4350B is 3.66 22 . The modeled lens-body parameters are summarized in Table 2 .
Beam and array ports and transmission line design
The phase center location of beam ports can be calculated using ( 12a ) and ( 12b ) as:
Also, the phase center location of array ports can be calculated using ( 11a ) and ( 11b ) as: where the origin is the center of the array port arc. Considering the center location of the ports, the width of the lens port aperture is roughly .
The microstrip width of the feed line can be calculated as 23 : where is the characteristic impedance, and h and t are the substrate and track thicknesses, respectively. The line width follows from the 50 Ω characteristic impedance and the proposed substrate parameters, and is then slightly optimized in the simulation process.
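As an illustration of this step, the classic Hammerstad synthesis formulas give the 50 Ω line width for the quoted substrate. This is an assumption on my part: the paper's own closed-form expression (Ref. 23) may differ in detail, and the final width in the paper came from full-wave optimization.

```python
import math

def microstrip_width(z0, eps_r, h):
    """Approximate microstrip width for a target characteristic impedance z0,
    using the standard Hammerstad synthesis formulas (zero track thickness)."""
    a = (z0 / 60.0) * math.sqrt((eps_r + 1.0) / 2.0) \
        + ((eps_r - 1.0) / (eps_r + 1.0)) * (0.23 + 0.11 / eps_r)
    w_h = 8.0 * math.exp(a) / (math.exp(2.0 * a) - 2.0)   # narrow-line branch
    if w_h > 2.0:                                         # wide-line branch
        b = 377.0 * math.pi / (2.0 * z0 * math.sqrt(eps_r))
        w_h = (2.0 / math.pi) * (b - 1.0 - math.log(2.0 * b - 1.0)
               + ((eps_r - 1.0) / (2.0 * eps_r))
               * (math.log(b - 1.0) + 0.39 - 0.61 / eps_r))
    return w_h * h

w50 = microstrip_width(50.0, 3.66, 0.254e-3)   # Rogers 4350B, h = 0.254 mm
```

For this stack the synthesis yields a width of roughly 0.55 mm, the kind of starting value typically fine-tuned afterwards in a full-wave solver.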
After determining the phase center location, lens port width, and feed line width, a horn with the appropriate length is tapered toward the feed to overcome the impedance discontinuity problem due to connecting the large lens port width to the small feed line width.
According to Ref. 24 , the length of the triangular transition is suggested to be as follows: where is the width of the lens port aperture.
In this work, the tapering length is optimized based on a model and extracted results as shown in Fig. 4 to obtain optimum matching and insertion loss. As a result, the optimized horn length is where the number 4.57 in Ref. 24 is adjusted as . The schematic of the tapering transition is depicted in Fig. 5 .
Due to the very small distance between the ports and easy interconnection purpose, SMPM connectors are used. The impedance matching between the SMPM connector and the microstrip line is very sensitive for mmWave bands. The SMPM connectors are used for the beam ports using the transition procedure as detailed in Ref. 25 .
Dummy port design
In this design, we employ only a single dummy port with a wide aperture on each side of the parallel-plate contour, in place of multiple dummy ports, in order to simplify the structure.
The dummy ports are matched using a novel absorber sheet based termination load as described extensively in Ref. 26 .
The proposed high-performance and cost-effective microstrip termination load is based on the combination of a printed monopole antenna and an absorber sheet as shown in Fig. 6 that can be easily integrated with a microstrip line or used with a connector as a termination load for test measurement. The results show a good impedance matching between 20 and 67 GHz that can be effectively used as loaded dummy ports for mmWave applications.
In this design, this termination load is integrated into the dummy ports to provide good termination and reflection-less side walls of the Rotman lens.
Once all the Rotman lens parameters are specified, a mathematically generated geometry of the Rotman lens can be produced with Matlab, as shown in Fig. 7 .
The generated geometry can be imported to full wave simulation packages. Thus, the modeled Rotman lens geometry is imported to Ansys HFSS for simulation and further optimization.
Non-uniform array port design
The side lobe level (SLL) is a challenging parameter in the Rotman lens due to the unwanted reflections from side walls, beam and array ports, and also dummy ports 19 .
Chebyshev is a well-known tapered distribution that can be used to set the SLL to a specified value ( s ). In an N -element array, the peak value of the Chebyshev polynomial of order N − 1 can be expressed as 27 : where s is the SLL in dB and is the main lobe position, which can be calculated as:
The half-power beamwidth (HPBW) of the scanning array can be obtained using: where L is the array length, d is the inter-element spacing, and θ is the scanning angle.
In a non-uniform array design, the SLL can be controlled by the amplitude distribution among the elements, and there is a tradeoff between SLL and HPBW: lowering the sidelobes broadens the main lobe, increasing the HPBW 27 – 29 .
In this research work, an improved distribution scheme for the target array is first obtained using a Matlab code, aiming to enhance the SLL while keeping the HPBW almost constant. The resultant distribution is depicted in Fig. 8 and the distribution coefficients are presented in Table 3 .
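For reference, a Dolph–Chebyshev taper of this kind can be computed in pure Python with the usual DFT-based construction (sampling the order N − 1 Chebyshev polynomial and inverse-transforming to element weights). The 20 dB target below and the resulting coefficients are illustrative only; the paper's optimized coefficients are those of Tables 3 and 4. This sketch handles even element counts, which is what the 8-element array needs.

```python
import math, cmath

def cheb_poly(n, x):
    """Chebyshev polynomial T_n(x) for any real x."""
    if x > 1.0:
        return math.cosh(n * math.acosh(x))
    if x < -1.0:
        return ((-1) ** n) * math.cosh(n * math.acosh(-x))
    return math.cos(n * math.acos(x))

def dolph_chebyshev_weights(n_elem, sll_db):
    """Element amplitude taper giving equal sidelobes sll_db below the peak
    (even n_elem only in this sketch)."""
    order = n_elem - 1
    # Peak-value relation R = T_{N-1}(x0), so x0 = cosh(acosh(R) / (N - 1))
    x0 = math.cosh(math.acosh(10.0 ** (sll_db / 20.0)) / order)
    # Sample the polynomial (half-sample phase shift for even N), then DFT
    p = [cheb_poly(order, x0 * math.cos(math.pi * k / n_elem))
         * cmath.exp(1j * math.pi * k / n_elem) for k in range(n_elem)]
    w = [sum(p[k] * cmath.exp(-2j * math.pi * k * m / n_elem)
             for k in range(n_elem)).real for m in range(n_elem)]
    half = n_elem // 2 + 1
    w_sym = w[half - 1:0:-1] + w[1:half]      # symmetric element ordering
    peak = max(w_sym)
    return [v / peak for v in w_sym]

weights = dolph_chebyshev_weights(8, 20.0)    # 8 elements, 20 dB SLL target
```

For 8 elements and a 20 dB sidelobe target the taper is mild, with the edge elements weighted to roughly 0.58 of the center elements, which is in line with the tradeoff noted above.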
The improved non-uniform amplitude distribution is applied to the array port by altering the port width. Assuming the relation of power and impedance 22 :
The impedance of microstrip as a function of microstrip width to substrate height is 26 :
The following relation can be extracted for the width of the microstrip line:
Based on the amplitude coefficients and Eq. (21), with a center port width of 4.65 mm, the array port widths can be calculated as indicated in Table 3 .
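The amplitude-to-width mapping can be illustrated under two simplifying assumptions that are mine, not the paper's: equal excitation voltage (so port power scales as 1/Z) and a wide aperture whose impedance is roughly inversely proportional to its width (so width scales with delivered power, i.e. with the amplitude coefficient squared). The example taper values below are illustrative, not the Table 3 coefficients.

```python
CENTER_PORT_WIDTH_MM = 4.65   # center array-port width quoted in the text

def port_width_mm(amp_coeff):
    """Hypothetical mapping: power weight = amp_coeff**2, width assumed
    proportional to delivered power (Z ~ 1/width, P ~ 1/Z)."""
    return CENTER_PORT_WIDTH_MM * amp_coeff ** 2

# Example with an illustrative amplitude taper (edge port to center port)
widths = [round(port_width_mm(a), 2) for a in (0.58, 0.67, 0.88, 1.00)]
```

Under these assumptions the lower-amplitude edge ports come out narrower than the center port, matching the qualitative trend of a non-uniform aperture design.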
The new Rotman geometry including the non-uniform array ports as indicated in Table 3 is generated for further optimization by the full-wave simulation. To this end, the Rotman lens parameters and amplitude weighting, together with the positions of the array ports, are optimized using a genetic algorithm (GA) in HFSS in terms of the phase and amplitude errors and SLLs of the widest-angle beam as the worst case, at 24 GHz and 30 GHz as the start and stop operating frequencies. The optimized array port widths and corresponding distribution coefficients are shown in Table 4 . The symmetrically oriented array port numbers can be found in Fig. 7 .
The simulated resultant radiation patterns by exciting different beam ports for uniform and improved non-uniform distribution of array ports are presented in Fig. 9 .
It is clear that applying the optimized non-uniform distribution coefficients improves the SLL as well as the amplitude and phase performance. The minimum SLL for the uniform distribution is around 9 dB, while it is improved to around 12 dB for the optimized distribution.
The structure of the final Rotman lens beamforming network with optimized parameters is shown in Fig. 10 . Also, Table 5 summarizes the designed and optimized Rotman lens parameters.
It can be concluded that the optimized values are very close to the designed values confirming a good convergence between the optimized simulation parameters and the proposed design procedure.
Possible dual-polarized 2-D configuration
The end-fire antenna element integrated with the beamforming network facilitates dual polarization implementation and possible stacking to obtain 2-D beamforming.
To design the 8 × 8 dual-polarized 2-D AAS, the proposed Rotman lens beamformers are crossed and vertical to each other as shown in Fig. 16 .
To realize the crossing, the extended length of the transmission lines is equal to the self-length of the Rotman lens. Eight of the proposed Rotman lens beamformers are stacked along the x -direction and y -direction with stacking spacing d (Fig. 16 ).
The first stage of the beamformer can be connected to the second stage of the beamformers via the SMPM connectors to construct the 2-D beamforming network. The recommended steering angle of the second stage of the beamformer is ± 15° (Table 1 ) which can be generated using a 3-beam port Rotman lens with a step angle of 10°. | Author contributions
A.A.; Formal analysis, Resources, Writing-original draft, Writing-review & editing, A.S.; Supervision, H.A.; Supervision.
Data availability
All the data required to evaluate the findings of this work is available in the manuscript. Any other additional data related to this work may be requested from the corresponding author.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:58 | Sci Rep. 2024 Jan 13; 14:1245 | oa_package/08/dc/PMC10787774.tar.gz |
||
PMC10787775 | 38218748 | Introduction
The cilium, a highly conserved organelle, extends from the cell surface and serves a variety of functions. Structurally, it consists of the ciliary membrane, axoneme, and basal body [ 1 , 2 ]. As an essential organelle, the cilium is involved in several cellular processes, including sensory perception, cellular motility, signaling and communication, cell division and differentiation, and cell-to-cell communication [ 3 ]. Additionally, the cilium also contributes to tissue homeostasis and developmental signaling [ 4 – 7 ]. Consequently, aberrations in the structural integrity or functional capacity of cilia are implicated in a spectrum of genetic disorders collectively termed ciliopathies [ 4 , 8 ]. These conditions present a diverse array of pathologies. Polycystic kidney disease (PKD), for instance, emerges from genetic mutations that trigger the formation of multiple cysts in kidney tissues [ 9 ]. Similarly, Bardet-Biedl syndrome (BBS) originates from genetic anomalies and is characterized by a multi-systemic impact. Individuals with BBS often exhibit a combination of symptoms such as progressive vision loss, obesity, polydactyly, and kidney irregularities [ 10 – 12 ].
Ciliary dysfunction and ciliopathies occur due to the absence or malfunctioning of proteins essential for ciliogenesis [ 9 , 13 ]. The proteins required for ciliogenesis are almost all synthesized in the cytoplasm and subsequently transported to the cilium through a specialized process known as intraflagellar transport (IFT) [ 14 ]. IFT is a complex and highly regulated microtubule-based transport system that facilitates the movement of proteins along the ciliary axoneme [ 15 ]. This system is anchored by two principal protein complexes: the retrograde IFT-A complex and the anterograde IFT-B complex. The bidirectional movement of IFT complexes within cilia relies on distinct motor proteins. Kinesin-2 is responsible for anterograde transport, moving the IFT-B complex toward the ciliary tip, while dynein-2 facilitates retrograde transport, returning the IFT-A complex to the base. This arrangement ensures coordinated bidirectional trafficking along the cilium [ 16 ]. The core IFT machinery, together with the motor proteins, mediates the trafficking of ciliary structural and signaling proteins.
Furthermore, recent studies have demonstrated that IFT-independent kinesins, also termed non-IFT kinesins, which do not directly transport cargos in conjunction with the IFT system, also play important roles in ciliogenesis. For example, mutations in Kif7, Kif9, Kif11, or Kif19A cause abnormalities in ciliary length as well as ciliopathy-related phenotypes [ 17 – 22 ]. Hence, in this review, we mainly focus on the roles and mechanisms of non-IFT kinesins in ciliary formation and highlight their unique features compared to IFT kinesins. A deeper understanding of these mechanisms can provide insights into the modulation of ciliogenesis and inform the development of new therapeutic strategies for ciliopathies.

Abstract

Cilia are highly conserved eukaryotic organelles that protrude from the cell surface and are involved in sensory perception, motility, and signaling. Their proper assembly and function rely on the bidirectional intraflagellar transport (IFT) system, which involves motor proteins, including anterograde kinesins and retrograde dynein. Although the role of IFT-mediated transport in cilia has been extensively studied, recent research has highlighted the contribution of IFT-independent kinesins to ciliary processes. The coordinated activities and interplay between IFT kinesins and IFT-independent kinesins are crucial for maintaining ciliary homeostasis. In this comprehensive review, we delve into the specific contributions and mechanisms of action of the IFT-independent kinesins in cilia. By shedding light on their involvement, we hope to gain a more holistic perspective on ciliogenesis and ciliopathies.
Facts
Cilium assembly involves a specialized protein transport mechanism known as intraflagellar transport (IFT), which is characterized by the bidirectional trafficking of a large protein complex along the microtubules within cilia.
The anterograde movement of the IFT is facilitated by members of the kinesin-2 family, typically referred to as IFT-dependent kinesins.
IFT-independent kinesins, also termed non-IFT kinesins, refer to a broad category of motor proteins that do not directly transport cargos in conjunction with the IFT system.
Non-IFT kinesins have been found to localize at the basal body or axoneme of cilia and contribute to the maintenance of cilia and ciliary signaling pathways.
Mutations in numerous non-IFT kinesins are intricately linked with a spectrum of ciliopathies.
Open questions
What are the specific mechanisms by which non-IFT kinesins coordinate their actions during various stages of ciliogenesis, and how do they contribute to this complex cellular process?
What is the physiological and pathological significance of non-IFT kinesin-mediated ciliary homeostasis in tissue development and human disease?
While certain correlations between non-IFT kinesins and ciliopathies have been established, the underlying mechanisms remain elusive. Can non-IFT kinesins be therapeutically targeted for the treatment of ciliopathies?
Cilia: conserved and multifunctional organelles
Cilia are microtubule-based organelles prevalent in a myriad of cell types, playing vital roles in various cellular activities. These organelles can be divided into two main categories: motile cilia and primary cilia. Both types feature an axoneme composed of microtubules. Motile cilia are characterized by the “9 + 2” axoneme arrangement, which consists of nine pairs of doublet microtubules surrounding a central pair. The outer doublets of a motile cilium are linked to dynein arms and radial spokes, which are pivotal in controlling the direction and force of ciliary beating. On the other hand, primary cilia exhibit a “9 + 0” axoneme configuration, lacking the central microtubule pair, dynein arms, and radial spokes, thereby rendering them immotile [ 16 ] (Fig. 1 ).
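The structural contrast just described can be captured as a small lookup table. The sketch below is purely illustrative; the field names are shorthand for the features named above, not standard nomenclature:

```python
# Structural features of the two main cilium types, as described in the text.
# Field names are illustrative shorthand, not standard nomenclature.
AXONEME_TYPES = {
    "motile": {
        "arrangement": "9+2",   # nine doublet microtubules around a central pair
        "central_pair": True,
        "dynein_arms": True,    # control direction and force of ciliary beating
        "radial_spokes": True,
        "motile": True,
    },
    "primary": {
        "arrangement": "9+0",   # nine doublets, no central pair
        "central_pair": False,
        "dynein_arms": False,
        "radial_spokes": False,
        "motile": False,        # immotile; acts as a sensory antenna
    },
}

def differing_features(a: str, b: str) -> list:
    """Return the feature names on which two axoneme types differ."""
    return sorted(k for k in AXONEME_TYPES[a]
                  if AXONEME_TYPES[a][k] != AXONEME_TYPES[b][k])

print(differing_features("motile", "primary"))
```

As the comparison makes explicit, every listed feature separates the two types, which is why the same axonemal scaffold can serve either propulsion or sensation.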
Other components of the cilium include the ciliary membrane, basal body, and transition zone. The ciliary membrane, which is connected to the plasma membrane, envelops the entire axoneme of the cilium. This membrane is enriched with various signaling receptors and ion channels, including those involved in the Hedgehog pathway and Ca 2+ channels, enabling the cilium to function as an important signaling hub [ 4 , 23 ]. The basal body, derived from the mother centriole, reverts back to a centriole during ciliary disassembly preceding cell division [ 1 , 24 ]. The transition zone, located between the basal body and axoneme, regulates the influx and efflux of liquids and proteins, thus establishing the composition within the cilia [ 25 ] (Fig. 1 ). The collective contribution of these intricate structures and components determines the architecture and performance of cilia in cellular processes.
Motile cilia are designed for dynamic movement, facilitating the generation of directed fluid flows through coordinated activity. In contrast, primary cilia act as sensitive probes, capturing various signals from the environment, and triggering responses that are crucial for regulating cell division, development, gene activity, migration, and overall cell and tissue morphology. Owing to their extensive presence in mammalian organisms and critical role in signaling pathways, the same ciliary gene mutations or abnormal expression has the potential to cause varying manifestations of ciliary abnormalities and inconsistent symptoms of ciliopathies [ 8 , 26 , 27 ]. The variability in symptoms can result from factors such as genetic background, environmental influences, the extent of gene mutation or dysregulation, and the specific cell types or tissues affected. Nonetheless, the specific mechanisms underlying ciliopathies remain elusive, leaving ample scope for discovery in this field.
IFT: protein translocation machinery in cilia
During the growth of the cilium, the axoneme is assembled by the addition of new axonemal subunits to its distal tip. However, cilia lack the machinery necessary for protein synthesis, and the site of axoneme assembly is far removed from the cell body, where the building materials are synthesized. The cell solves this delivery problem by means of IFT [ 28 , 29 ]. During IFT, non-membrane-bound particles are moved along the axonemal doublet microtubules, beneath the ciliary membrane. Anterograde IFT-B particles move from the ciliary base to the tip for the proper assembly and maintenance of the ciliary axoneme and membrane. At the ciliary tip, the building blocks are released, and IFT-B particles are then transported back to the ciliary base by IFT-A [ 16 ]. This IFT process is well conserved and required for the assembly of most cilia and eukaryotic flagella. The movement of these IFT particles is driven by motor proteins, the anterograde kinesin and the retrograde dynein, which move up and down the cilium [ 30 ].
Classification and characterization of kinesins
Kinesins constitute a superfamily of motor proteins, classified into 14 subclasses (kinesin-1 to kinesin-14B) through phylogenetic analysis [ 31 ] (Fig. 2A ). Each member of the kinesin superfamily (KIF) possesses a common motor domain, which utilizes the chemical energy from ATP hydrolysis to initiate movement along microtubules. These kinesins are generally divided into three categories based on the location of the motor domain: N-kinesins carry a motor domain in the amino-terminal region, M-kinesins have their motor domain in the middle, and C-kinesins contain it in the carboxy-terminal region. Typically, N-kinesins show directed motility towards the plus (rapidly growing) end of the microtubule, while C-kinesins move towards the minus (slowly growing) end. In contrast, M-kinesins destabilize microtubules instead of migrating along them (Fig. 2B ). However, some kinesin-8 (N-kinesin) and kinesin-14 (C-kinesin) motors can both traverse and depolymerize microtubules [ 32 – 34 ]. Furthermore, certain kinesin-5 and kinesin-14 family members can cross-link and slide adjacent microtubules, adding complexity to these generalizations [ 35 , 36 ].
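The positional classification above lends itself to a compact summary. The sketch below simply encodes the text's generalizations (motor-domain position versus typical behavior); as noted, families such as kinesin-8 and kinesin-14 can violate them:

```python
# Generalized kinesin behavior by motor-domain position, encoding the
# classification described in the text. Real families (e.g., kinesin-8,
# kinesin-14) can violate these generalizations.
MOTOR_CLASSES = {
    "N-kinesin": {"motor_domain": "amino-terminal",
                  "typical_behavior": "walks toward microtubule plus end"},
    "M-kinesin": {"motor_domain": "middle",
                  "typical_behavior": "depolymerizes microtubules"},
    "C-kinesin": {"motor_domain": "carboxy-terminal",
                  "typical_behavior": "walks toward microtubule minus end"},
}

def typical_behavior(motor_class: str) -> str:
    """Look up the textbook behavior for a positional kinesin class."""
    return MOTOR_CLASSES[motor_class]["typical_behavior"]

print(typical_behavior("M-kinesin"))
```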
In addition to the motor domain, many kinesins encompass a neck linker region, a stalk region, and a tail domain [ 37 ]. The neck linker region, connected to the motor domain, acts as a flexible hinge and assists in transmitting conformational changes during the ATP hydrolysis cycle. The stalk region ensures stability, connecting the motor domain to the cargo-binding tail domain. This tail domain interacts with specific cargo molecules, enabling kinesins to transport various cargoes within cells. Further, coiled-coil segments that mediate oligomerization are present in many kinesins; most kinesin motors are homodimers, but other arrangements exist. For instance, kinesin-1 motors are heterotetramers comprising two subunits: kinesin heavy chain and kinesin light chain; kinesin-2 motors split into two subfamilies, either heterotrimers (KIF3A-KIF3B-kinesin associated protein) or homodimers (KIF17); kinesin-3 motors may exist as monomers or homodimers; and the kinesin-5 family consists of homotetrameric motors (Fig. 2A ) [ 38 , 39 ]. These kinesins, which all belong to the N-kinesins, actively transport cargoes directionally towards the plus-end of microtubules, which form cylindrical polymers of 13 protofilaments. Despite the highly conserved nature of their motor domains, their cellular functions vary due to differences in these structural components.
Kinesins play a plethora of roles in a microtubule-dependent manner. One of their primary functions is vesicle transport, as kinesins assist in the movement of vesicles containing important molecules and organelles to specific locations within cells [ 39 ] (Fig. 3A ). This transport process is crucial for maintaining cellular functionality and assuring proper distribution of essential components. Another pivotal role of kinesins is macromolecule transport; these proteins aid in the movement of large molecules, such as proteins and nucleic acids, within the cell [ 40 , 41 ]. By facilitating the transport of these macromolecules, kinesins contribute to key cellular processes like gene expression and cellular signaling. Kinesins also participate in cell division processes tied to mitosis and meiosis, participating in chromosome segregation to ensure accurate distribution of genetic material to daughter cells [ 42 ].
It is noteworthy that the canonical IFT kinesin, kinesin-2, contributes to the dynamic nature of cilia and ensures their proper functioning [ 43 ]. By transporting cargoes, signaling molecules, and receptors to and from the ciliary membrane, kinesin-2 influences the extension of cilia and the modulation of ciliary signaling pathways [ 44 ] (Fig. 3B ). Accumulated evidence indicates that kinesin-2 has a distinctively longer neck linker region, which includes an additional three amino acid residues (Asp-Ala-Leu, DAL) at the C-terminus prior to helix α7 [ 45 ]. This characteristic underpins the mechanistic foundation for its shorter run lengths, a trait that seems to be adapted to its specific role of transporting ciliary proteins along the axoneme of cilia.
Moreover, recent works have underscored the significant involvement of non-IFT kinesins in the assembly and maintenance of cilia. While the mechanisms through which non-IFT kinesins contribute to ciliary homeostasis remain an active area of research, preliminary findings suggest that these kinesins may be involved in various ciliary processes, such as the regulation of ciliary length, the transport of specific cargoes, or the modulation of ciliary signaling pathways. Therefore, further research into the roles of these non-IFT kinesins may yield new insights into the molecular mechanisms underlying ciliary function and dysfunction, and ultimately lead to the development of novel therapeutic strategies for ciliopathies.
IFT kinesins
There are two types of anterograde IFT motors, namely the heterotrimeric kinesin-2 and the homodimeric OSM-3 or KIF17. These kinesins belong to the kinesin-2 family, also known as IFT kinesins. The heterotrimeric kinesin-2 complex consists of KIF3A/KIF3B/KIFAP3 and can move at a rate of 0.2-2.4 μm/s, depending on the species and ciliary type [ 46 ]. On the other hand, OSM-3 or KIF17 acts as a homodimer and moves at approximately 1.3 μm/s along the ciliary axoneme [ 47 , 48 ] (Fig. 3B ). The biogenesis of cilia requires the anterograde IFT driven by kinesin-2, as it is responsible for transporting IFT trains. These trains are believed to deliver axoneme precursors to the tip of the axoneme, where they are incorporated, and to organize and move ciliary membrane-associated signaling complexes. For example, in the green alga Chlamydomonas , inactivation of the FLA10 subunit of heterotrimeric kinesin-2 using conditional mutants leads to a gradual halt in IFT and defects in the assembly or maintenance of motile cilia [ 49 ]. This observation supports the hypothesis that heterotrimeric kinesin-2 drives the anterograde transport of IFT trains.
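As a back-of-the-envelope illustration of these velocities, one can estimate how long a single anterograde run would take along a cilium. The 5 μm ciliary length below is an assumed, typical value chosen for illustration, not a figure from the text:

```python
# Back-of-the-envelope traversal times for anterograde IFT motors.
# Velocities are the values quoted in the text; the 5 um ciliary length is
# an assumed, typical value for illustration only.
def traversal_time_s(length_um: float, velocity_um_per_s: float) -> float:
    """Seconds for a motor to run the full length of the axoneme."""
    return length_um / velocity_um_per_s

CILIUM_LENGTH_UM = 5.0  # assumed typical length, not from the text

kinesin2_range = (traversal_time_s(CILIUM_LENGTH_UM, 2.4),
                  traversal_time_s(CILIUM_LENGTH_UM, 0.2))  # fastest, slowest
osm3_kif17 = traversal_time_s(CILIUM_LENGTH_UM, 1.3)

print(f"kinesin-2: {kinesin2_range[0]:.1f}-{kinesin2_range[1]:.1f} s")
print(f"OSM-3/KIF17: {osm3_kif17:.1f} s")
```

Under these assumptions a single run spans seconds to tens of seconds, consistent with IFT being a continuous, iterated transport process rather than a one-shot delivery.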
The role of kinesin-2 motors in the assembly of sensory cilia in Caenorhabditis elegans amphid channels differs and presents a more intricate process [ 50 ]. The axonemes of these cilia possess a bipartite structure characterized by a core comprising nine doublet microtubules known as the middle segment. From this middle segment, nine singlet microtubules extend to form the distal segment, which plays a critical role in certain forms of chemosensory signaling. The assembly of these axonemes involves a unique and unexpected collaboration between the heterotrimeric kinesin-2, kinesin-II, and the homodimeric kinesin-2, OSM-3. In this collaboration, the middle-segment assembly involves both motors transporting IFT trains along the middle segment, while the distal-segment assembly depends only on OSM-3 transporting IFT trains along the distal segment. Therefore, in wild-type animals, kinesin-II and OSM-3 both contribute redundantly to the assembly of the middle segment, while OSM-3 alone is responsible for constructing the distal segment.
In contrast, the cilia found on olfactory receptor neurons in Drosophila also exhibit a bipartite organization and develop through a different two-step pathway [ 51 ]. However, in this case, heterotrimeric kinesin-2 alone appears to be sufficient for the assembly of the entire axoneme. In mice, heterotrimeric kinesin-2 may have additional ciliogenic functions beyond driving IFT that cannot be compensated for by KIF17, as it is required for the proper organization of centrioles, which form the basal body of the cilium [ 52 ]. Additionally, in zebrafish, the absence of KIF17 results in a loss or disorganization of outer segments in retinal photoreceptors, while it does not affect the formation of motile cilia in the pronephros [ 53 ]. These observations indicate that diverse mechanisms for employing kinesin-2 motors have evolved to facilitate cilium assembly.
Non-IFT kinesins
Beyond the well-known IFT kinesins, recent works have unveiled the involvement of non-IFT kinesins in maintaining ciliary homeostasis. These kinesins localize to the basal body or axoneme of cilia and contribute to regulating ciliary length and ciliary signaling pathways (Fig. 4 ). Such roles could be expected for members of the kinesin-13 and kinesin-4 subfamilies, which are known to have microtubule-depolymerizing activities and therefore negatively control the length of axonemal microtubules and the cilium-dependent Hedgehog signaling pathway [ 20 , 21 , 54 ]. In addition to the depolymerizing kinesins, knockout of certain kinesin genes has identified several new kinesin members involved in diverse functions at cilia.
Kinesin-1 (KIF5B)
Kinesin-1, the first identified plus-end-directed microtubule motor, is involved in various cellular processes through its interactions with different cargoes such as vesicles, organelles, mRNAs, and multiprotein complexes [ 55 , 56 ]. Kinesin-1 is a heterotetramer composed of two heavy chains and two light chains. The microtubule binding motor region is found in the N-terminus of the heavy chain, which can be encoded by three different genes ( Kif5A, Kif5B, Kif5C ). KIF5A and KIF5C are expressed exclusively in neurons, while KIF5B is ubiquitous. Each heavy chain dimer associates with two copies of KLC1 or KLC2, which are expressed in most cell types [ 39 ].
Studies have indicated that KIF5B and KLC1 localize to the basal body and play an inhibitory role in ciliary extension, as depletion of these proteins leads to abnormally elongated cilia. Knockdown of KIF5C alone does not significantly affect ciliary length, and KIF5A is not highly expressed in hTERT-RPE cells, a cell line known to induce ciliary formation in vitro. Furthermore, genetic interaction studies suggest that the nuclear/cytoplasmic distribution of CCDC28B, a protein associated with Bardet-Biedl syndrome, is influenced by KIF5B, as targeting KIF5B leads to nuclear accumulation of CCDC28B [ 57 ].
Kinesin-3 (KIF13B/KLP-6)
Kinesin-3 family members are plus-end directed motors involved in vesicle transport and endocytosis. Among them, KIF13B (also known as guanylate kinase-associated kinesin or GAKIN) is implicated in the regulation of neuronal polarity, axon formation and myelination, Golgi to plasma membrane trafficking, germ cell migration, and planar cell polarity signaling [ 39 ].
Recent studies have shown that KIF13B undergoes bursts of IFT-like bidirectional movement within primary cilia, and its depletion leads to ciliary accumulation of the cholesterol-binding membrane protein CAV1 and impaired Hedgehog signaling [ 58 , 59 ]. Additionally, the velocities of anterograde and retrograde intraciliary movement of KIF13B are similar to those of IFT, but its movement within the cilium requires its own motor domain. Interestingly, the homolog of KIF13B, KLP-6, has been observed to move in cilia of Caenorhabditis elegans and modulate the velocities of IFT and kinesin-2 motors. KLP-6 acts as a positive regulator of ciliary length extension, as its accumulation in the cephalic male cilia promotes elongation of cilia [ 60 ]. This demonstrates the modulation of general kinesin-2-driven IFT processes by kinesin-3 in the cilia of Caenorhabditis elegans male neurons.
Kinesin-4 (KIF7/KIF27)
Kinesin-4 is a remarkable motor protein family due to its unique ability to depolymerize microtubules [ 61 ]. It plays critical roles in cell division, microtubule organization, and signal transduction. Among its members, KIF7 serves as a conserved regulator of the Hedgehog signaling pathway. This kinesin facilitates the transmission of signals from the membrane protein Smoothened to the Gli transcription factors. A recent finding suggests that KIF7 regulates the length of the microtubule plus end and promotes the precise localization and proper regulation of Gli and the inhibitory factor Sufu at the tip of primary cilia. Furthermore, KIF7 mutations cause primary cilia abnormalities, including excessive length, twisting, and instability. These defects lead to the formation of ectopic tip-like compartments where Gli-Sufu complexes become localized and inappropriately activated in the absence of the sonic hedgehog ligand [ 21 ].
Another member of the kinesin-4 family, KIF27, also plays a role in cilia-related processes. KIF27, the closest mammalian homologue of KIF7, is found in motile cilia and shares the ability of KIF7 to regulate axonemal microtubule dynamics. Specifically, KIF27 contributes to the assembly of the central pair of microtubules in “9 + 2” motile cilia through its interaction with Fused [ 62 ]. Mice with defective KIF27 exhibit suppurative inflammatory responses in the nasal passages and middle ear, as well as hydrocephalus [ 63 ].
Kinesin-5 (KIF11)
Kinesin-5, also known as kinesin family member 11 (KIF11) or Eg5, plays crucial roles in the formation and maintenance of bipolar spindle orientation during cell division. These activities are facilitated by its unique antiparallel tetrameric structure, which enables the motor protein to crosslink and slide adjacent microtubules [ 64 ]. Apart from its mitotic functions, KIF11 has also been found to have non-mitotic roles, including protein transport from the Golgi complex to the cell surface, regulation of axonal growth and branching, and ciliary formation [ 17 , 18 , 65 , 66 ].
Our previous research has shown that KIF11 localizes to the basal body of primary cilia in various cell types. Knockdown of KIF11 expression in RPE1 cells leads to a decrease in ciliary length and number and perturbs Hedgehog signaling [ 17 ]. Another study further supports the non-mitotic role of KIF11 in cilia, demonstrating that KIF11 plays a critical role in regulating ciliary behavior [ 18 ]. Moreover, KIF11 expression is significantly higher in glioblastoma cells compared to normal cells, and there is also an overexpression of Hedgehog signaling in glioblastoma [ 67 ]. These suggest that KIF11-mediated ciliogenesis may contribute to the overactivation of Hedgehog signaling in glioblastoma cancer cells, which holds potential implications for future cancer treatment strategies.
Kinesin-8 (KIF19A)
Kinesin-8 members possess remarkable capabilities of both walking towards the plus-ends of microtubules and depolymerizing these ends upon arrival, thereby exerting control over microtubule length [ 33 ]. These motor proteins are observed on cytoplasmic microtubules during interphase and near kinetochores during cell division. Disruption of their function during mitosis leads to the formation of excessively long spindle microtubules, resulting in aberrant chromosomal segregation. This observation strongly supports the notion that precise regulation of microtubule length by kinesin-8 motors is crucial for accurate cell division.
Among these motors, KIF19A has been extensively studied for its role in regulating ciliary length by depolymerizing microtubules at the tips of cilia. Depletion of KIF19A in mice results in the manifestation of ciliopathy phenotypes, including hydrocephalus and female infertility, caused by the presence of abnormally elongated cilia that are unable to generate proper fluid flow [ 68 ]. Recent research has indirectly demonstrated that KIF19A plays a pivotal role in mediating ciliary length in mammals. For instance, depletion of adenylate cyclase 6 in mice leads to elongated cilia in airway epithelial cells, primarily due to decreased KIF19A protein levels in the cilia resulting from its degradation through autophagy [ 69 ]. These studies shed light not only on the genetic regulation of cilia by KIF19A but also on the mechanisms underlying the regulation or control of KIF19A itself.
Kinesin-9 (KIF9A/KIF9B)
Kinesin-9 members are motor proteins that are exclusively expressed in tissues containing motile cilia or flagella, such as the testis, brain, and lung, as well as in flagellated microorganisms like Giardia, Leishmania, and Chlamydomonas . These kinesin motors primarily move towards the plus end of microtubules. The kinesin-9 family consists of two subfamilies: KIF9A, which includes Chlamydomonas reinhardtii KLP1, and KIF9B, which includes human KIF6. KLP1 is localized to the central pair microtubules of the axoneme and plays a role in influencing flagellar motility [ 70 ]. Disruption in KLP1 function leads to flagella that beat slowly or become paralyzed.
Recent studies have highlighted the importance of KIF9 in ciliary motility. KIF9 is highly conserved across evolutionary species and is considered the vertebrate ortholog of KLP1. It has been reported that KIF9 localizes to the axoneme of sperm flagella and cilia in multiciliated cells, such as those found in Xenopus and human airways. KIF9 is responsible for maintaining proper ciliary motility and the integrity of the distal end of the axoneme [ 19 ]. In contrast, KIF6 is localized to both the axoneme and basal body of multiciliated cells. It is not only essential for ciliary motility but also plays a specific role in the formation of cilia in ependymal cells. Studies have shown that mutations in Kif6 can lead to neurodevelopmental defects and intellectual disability in humans [ 71 ].
Kinesin-13 (KIF24/KIF2A)
The kinesin-13 family specifically contains M-kinesins. Unlike conventional kinesins, kinesin-13 proteins do not walk along microtubules but instead depolymerize them using ATP. This depolymerizing activity of kinesin-13 proteins operates in a range of physiological contexts such as spindle assembly, chromosome segregation, and axonal growth.
Early studies have shown that kinesin-13 members in Giardia, Leishmania, and Chlamydomonas are localized to axonemes and play a role in regulating the length of flagella [ 72 – 74 ]. In mammals, however, the kinesin-13 family consists of KIF2A, KIF2B, KIF2C/MCAK, and KIF24. KIF24 has been reported to block ciliogenesis by recruiting CP110 at the mother centrioles and remodeling centriole microtubules through its microtubule-depolymerizing activity [ 24 , 54 ]. Moreover, research has demonstrated that even in cycling cells, knockdown of KIF24 by small interfering RNA leads to inappropriate ciliogenesis. Another kinesin-13 member, KIF2A, has been shown to have the ability to disassemble primary cilia by depolymerizing microtubules in response to growth signals, with its activity controlled by PLK1 [ 75 ].
Emerging roles of non-IFT kinesins in ciliopathies
Considering the pivotal contribution of non-IFT kinesins to the maintenance of ciliary homeostasis, it is unsurprising that these kinesin motors are intricately linked with a spectrum of ciliopathies. Microcephaly, a neurological malformation characterized by an abnormally small head circumference, is one of the most frequently associated clinical signs [ 76 ]. Notably, mutations in the genes encoding the kinesin motors KIF1B, KIF14, KIF16B, KIF11, KIF10, KIF15, and KIF2A have been identified in numerous patients with microcephaly [ 77 – 79 ].
Non-IFT kinesins are also involved in other neuronal disorders related to ciliopathies. For instance, KIF4A, KIF6, and KIF7 have ascended to prominence as putative genes of interest in the etiology of hydrocephalus [ 80 – 82 ]. Investigations into the developmental biology of KIF26A underscore its potential role in neural system development, as knockout mouse models reveal critical deficits such as enteric nerve hypoplasia [ 83 , 84 ]. The proteins KIF1A and KIF5 are of paramount importance for higher-order brain functions, namely learning and memory, exerting influence through the modulation of synaptic transmission [ 85 ]. Peripheral neuropathies represent yet another sphere in which KIF1A and KIF1B demonstrate a genetic association [ 86 ].
Transgenic models, particularly mice with targeted deletions of KIF genes, have surfaced with a spectrum of ciliopathy syndromes. These include kidney disorders resulting from KIF26B mutations [ 87 ], and KIF19A depletion leading to female infertility [ 22 ]. Complementing these insights, recent discoveries have delineated biallelic variants of KIF24 as pathogenic factors in skeletal ciliopathies, encompassing variants such as acromesomelic skeletal dysplasia and spondylometaphyseal dysplasia [ 88 ]. Furthermore, genetic variants in KIF1B , KIF21B , and KIF5A have been associated with increased vulnerability to multiple sclerosis [ 89 – 91 ]. Collectively, this evidence reinforces the notion that non-IFT kinesins are crucial to ciliary function and, when impaired, contribute to the pathogenesis of a multitude of abnormalities related to ciliopathies.
Concluding remarks
The study of kinesins and their roles in cilia biology has undergone significant advancements over recent years, revealing the intricate mechanisms by which these motor proteins contribute to ciliary assembly, maintenance, and function. In this review, we have discussed the emerging roles of non-IFT kinesins in cilia-related processes, providing insights into their diverse functions and their implications for cellular homeostasis and human health. While IFT kinesins have long been recognized as central players in cilia assembly and maintenance, the discovery of non-IFT kinesins’ involvement adds a layer of complexity to our understanding of ciliary activities. Emerging evidence compellingly indicates that the various kinesin families are interdependent, collaboratively maintaining ciliary homeostasis. The observed interplay between IFT-associated and non-IFT kinesin proteins poses fascinating questions regarding their mechanisms of communication and cooperation. This teamwork is crucial for the modulation of the ciliary length, the precision of cargo transport, and the nuanced modulation of signaling pathways. Future research aimed at deciphering the crosstalk between these kinesin families will provide deeper insights into the mechanisms governing cilia biology.
The identification of non-IFT kinesins as critical players in cilia-related processes has important implications for our understanding of ciliopathies. Mutations in various ciliary components, including kinesins, have been linked to the development of ciliopathies, underscoring the significance of these motor proteins in maintaining cellular homeostasis [ 8 , 13 ]. For example, the presence of KIF11 in the connecting cilium of photoreceptors, and the identification of KIF11 mutations in patients with retinal diseases such as MLCRD (microcephaly, lymphedema, and chorioretinal dysplasia), CDMMR (chorioretinal dysplasia, microcephaly, and mental retardation), and FEVR (familial exudative vitreoretinopathy) suggest that KIF11 may play a vital role in the pathological processes of these conditions by mediating photoreceptor ciliary homeostasis [ 79 , 92 , 93 ]. Elucidating the roles of non-IFT kinesins in cilia biology may offer valuable insights into the molecular mechanisms underlying ciliopathy pathogenesis. Furthermore, as multiple members of the kinesin family are continuously being identified as potential targets for treating various diseases, including cancer [ 94 ], exploring cilia and ciliary proteins as a strategy for addressing ciliopathies holds great promise [ 95 , 96 ].
As the field of kinesin research continues to advance, several intriguing questions and avenues for future investigation arise. For example, ciliary homeostasis represents a complex and finely tuned regulatory process encompassing assembly, disassembly, and maintenance phases [ 97 ], yet the specific contributions of different kinesin proteins within this balance are not well understood. Moreover, the mechanisms by which multiple members of this large kinesin family work in concert remain elusive. Most importantly, the physiological and pathological significance of kinesin-mediated ciliary homeostasis in development and human disease remains unclear. These knowledge gaps present a compelling case for future research to unravel the intricate orchestra of kinesin activities that maintain ciliary homeostasis and to decipher their broader implications in health and disease.
The core IFT-dependent machinery is crucial for the transport of ciliary and signaling proteins. However, certain ciliated protists that lack genes encoding IFT components, as well as some metazoan spermatozoa, use IFT-independent mechanisms to assemble axonemes that are exposed to the cytosol [ 98 , 99 ]. During this process, all or a portion of the axoneme is, at least temporarily, not enveloped by the plasma membrane but is instead exposed to the cytoplasm. This distinct IFT-independent ciliogenesis pathway permits a robust exchange with cytosolic proteins, and consequently, IFT is presumably excluded from playing any direct part in such cytosolic ciliogenesis events. This unconventional pathway delivers profound insights into the molecular machinations that govern non-IFT kinesins in maintaining ciliary homeostasis. For example, basal body-localized KIF11 is a newly identified pivotal protein in ciliogenesis that strikingly lacks inherent motor activity yet is vitally influential in ciliary length. To date, the molecular mechanisms underpinning this role of KIF11 remain unclear. In this context, we postulate that KIF11 may harness a mechanism resonant with the IFT-independent ciliogenesis pathway. Such a role would likely entail exploiting the cytoplasm’s microtubule framework to effectuate the translocation and assembly of requisite constituents for ciliary construction, providing a greater understanding of this kinesin protein’s involvement in the complex narrative of cilia formation and maintenance.
The development of advanced imaging techniques will enable researchers to visualize kinesin behavior within cilia with unprecedented detail, offering new insights into their functions. Continued functional studies in model organisms and human genetics will provide valuable information about the roles of non-IFT kinesins in various biological contexts. In conclusion, the emerging roles of non-IFT kinesins in cilia biology have broadened our understanding of ciliary dynamics and cellular function. These motor proteins contribute to a range of processes within cilia, including assembly, length regulation, cargo transport, and signaling. By shedding light on the specific ciliary activities of non-IFT kinesins, their implications for ciliopathies, and their diverse functions beyond cilia, this review emphasizes the intricate and multifaceted nature of kinesin-mediated regulation in cellular processes. As research in this field progresses, we anticipate that further insights into the roles and mechanisms of non-IFT kinesins will continue to shape our understanding of cellular biology and human health. | Author contributions
JR and LL drafted the original manuscript and prepared the figures; JR revised the manuscript. All authors have read and agreed to the final version of the manuscript.
Funding
This work was supported by grants from the National Natural Science Foundation of China (32241014 and 32170687).
Data availability
Data sharing is not applicable, as no datasets were generated or analyzed during this study.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-15 23:41:58 | Cell Death Dis. 2024 Jan 13; 15(1):47 | oa_package/ab/2b/PMC10787775.tar.gz |
||||
PMC10787776 | 38218731 | Introduction
Understanding the changing characteristics of floods 1 and their relationship with the physical causative mechanisms is a prerequisite for developing effective flood management strategies 2 , 3 . Physical causes of short term variability and long term changes in extreme floods vary between catchments 4 , 5 . Investigating the relative importance of key drivers of floods is therefore critical for improving the scientific understanding of catchment dynamics. Changing characteristics and causes of floods are well documented across many catchments in Europe 3 , 6 – 10 and the United States 2 , with extreme rainfall, soil moisture excess and snowmelt as potential drivers. The influence of rainfall and soil moisture extremes on flood peaks has been evaluated in Australian 11 , 12 and African 13 catchments. However, there is no systematic study identifying the importance of flood generating mechanisms in Indian catchments.
There is significant evidence that rainfall extremes are intensifying in response to warming 14 , 15 , whereas the evidence for an increase in floods remains elusive 16 . Increasing trends in rainfall extremes 17 – 20 and an increase in flood risk 21 , 22 have been reported in India. However, studies on understanding the physical causes of such trends remain limited 23 – 25 . A recent study identifies multiday rainfall as a prominent driver of floods in India by examining the soil moisture conditions and rainfall before high flow events simulated using the Variable Infiltration Capacity (VIC) model 26 . The authors adopt an event-based approach to identify the flood drivers, but the analysis does not consider the role of groundwater in triggering floods. Floods have serious impacts on agriculture, infrastructure, water resources systems and reservoir operations. Therefore, a detailed assessment is required to classify the flood generating mechanisms in Indian catchments. Identifying the hydrological processes which trigger floods will not only improve our understanding of flood mechanisms in Indian catchments but also provide a foundation for robust flood risk assessment.
Rainfall and subsurface antecedent wetness conditions prior to the flood event are the primary drivers of floods in India 24 , 26 , while snowmelt triggers floods in only a few catchments 27 , 28 . The role of soil moisture in driving river floods is widely recognized in the literature 2 , 3 , 11 – 13 , whereas groundwater is seldom considered in flood related studies. Groundwater plays an important role in maintaining the flow of rivers, but its influence on floods is poorly understood 29 . Groundwater well observations are sparse and may not represent the influence of water storage in the deeper saturated zone on floods. Therefore, baseflow is used to understand the role of groundwater storage in controlling floods in Peninsular India.
Annual maximum flows are the largest floods experienced in a year and often represent the most disastrous flood event. Trends in the annual flood magnitudes are estimated to understand the changes in water availability. The impact of reservoirs is also examined in this study to understand the influence of flow regulations in Peninsular India. The natural flow regime of rivers has been largely altered due to the boom in dam construction across the world during the last century 30 . The regulation of rivers with reservoirs for different purposes such as irrigation, hydropower, water supply and flood control significantly alters the downstream flow by storing and releasing water according to certain operation rules. Flow regulations affect the magnitude, frequency and timing of downstream high and low flows 31 – 34 . Therefore, it is important to study the impact of reservoirs on the flow regime using downstream streamflow records. This study presents an analysis of pre- and post-dam-construction high flow changes to understand the influence of reservoirs on annual flood characteristics: peak, volume and duration. Understanding the extent to which reservoir regulations have affected the flood characteristics is crucial for designing better reservoir operation rules in Peninsular catchments.
The importance of rainfall, soil moisture and baseflow for generating floods in Peninsular India is investigated first using Kendall’s rank correlation coefficient and the Pearson correlation coefficient for different lags at the annual timescale. However, high flows of slightly lower magnitude than the annual maximum flow can occur in the same year and in the same catchment. In addition, the drivers of these floods of different magnitudes can be quite different from those of annual maximum floods. An event-based approach which extracts the sample using peaks-over-threshold is more robust for assessing the importance of flood generating mechanisms 13 , 26 . Therefore, further investigation is performed using Event Coincidence Analysis (ECA), which tests for a possible causal influence of flood drivers in triggering flood events of slightly lower magnitude than annual floods. The triggering effect is evaluated using statistics based on trigger coincidences for the condition that extreme rainfall, soil moisture and baseflow are followed by the flood events. ECA results are used to find the dominant driver which has a higher influence on the flood events in Peninsular catchments. | Methods
Datasets
Daily streamflow time series for 70 catchments in six major river basins of Peninsular India (Fig. 1 a) are obtained from India Water Resources Information System ( https://indiawris.gov.in/wris/ ). Gaps in the daily streamflow records are filled using time series methods for synthesizing missing streamflow records 41 , 42 . Daily high resolution rainfall 43 dataset on a grid size of 0.25° is obtained from India Meteorological Department (IMD). European Space Agency Climate Change Initiative (ESA CCI) soil moisture 44 – 46 with daily temporal and 0.25° spatial resolution is used in this study. High resolution Aridity index values are obtained from Global Aridity Index and Potential Evapotranspiration Database—Version 3 (Global-AI_PET_v3) 47 . Information on location, year of construction, capacity and purpose of reservoirs is collected from Central Water Commission (CWC) report on National Register of Large Dams 48 . Catchments are delineated in Quantum Geographic Information System (QGIS) using Digital Elevation Model (DEM) obtained from Shuttle Radar Topographic Mission (SRTM) at 30 m spatial resolution ( https://srtm.csi.cgiar.org/srtmdata/ ). Catchment average rainfall, soil moisture and aridity index are calculated across the selected catchments of Peninsular India. ESA CCI daily soil moisture (COMBINED) data product is available from 1978; therefore a common period of 40 years (1979–2018) is selected based on the availability of data for all the variables.
Baseflow separation
George and Sekhar 49 find the Eckhardt filter 50 more suitable than other digital filters for baseflow separation in the Kabini basin, a tributary of the Cauvery river in the Western Ghats, India. Therefore, the Eckhardt filter, a two-parameter recursive filter, is used to estimate baseflow in the study area. The filter equation is given by $b_k = \frac{(1 - BFI_{max})\,\alpha\, b_{k-1} + (1 - \alpha)\, BFI_{max}\, y_k}{1 - \alpha\, BFI_{max}}$, where $\alpha$ is the recession constant, $BFI_{max}$ is the maximum baseflow index modelled by the algorithm, $b_k$ is the baseflow, and $y_k$ is the discharge for time step $k$. Peninsular catchments are underlain by hard rock aquifers; therefore $BFI_{max} = 0.25$ is selected. The recession constant is computed based on the master recession curve (MRC) method described in the WMO manual on low-flow estimation and prediction 51 . The beginning of a recession is marked below the threshold at least two days after the peak flood discharge. The segment length is computed for each catchment and the MRC is obtained by plotting pairs of successive recession discharges $Q_t$ and $Q_{t+1}$. The recession constant $\alpha$ is estimated as the slope of the curve. The procedure is illustrated for the randomly selected Haralahalli catchment of the Krishna river basin in Fig. S5 .
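The recursive filter lends itself to a compact implementation. Below is an illustrative pure-Python sketch (the authors state their analysis was done in R, so this is not their code); the initialisation choice, parameter values and discharge series are assumptions for demonstration only.

```python
def eckhardt_baseflow(q, alpha, bfi_max):
    """Eckhardt two-parameter recursive digital baseflow filter.

    b_k = ((1 - BFImax) * alpha * b_{k-1} + (1 - alpha) * BFImax * y_k)
          / (1 - alpha * BFImax),
    with baseflow constrained not to exceed total streamflow.
    """
    b = [q[0] * bfi_max]  # initialisation choice (assumption)
    denom = 1.0 - alpha * bfi_max
    for y_k in q[1:]:
        b_k = ((1.0 - bfi_max) * alpha * b[-1]
               + (1.0 - alpha) * bfi_max * y_k) / denom
        b.append(min(b_k, y_k))  # baseflow cannot exceed total discharge
    return b

# Illustrative daily discharge series (m^3/s) with a single storm peak
q = [10, 12, 80, 60, 40, 25, 18, 14, 12, 11]
baseflow = eckhardt_baseflow(q, alpha=0.97, bfi_max=0.25)  # hard-rock setting
bfi = sum(baseflow) / sum(q)  # long-term Baseflow Index of this toy series
```

The filtered series stays below the hydrograph everywhere, and the ratio of its sum to total discharge gives the Baseflow Index used later in the paper.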
Trend estimation
Trends in annual maximum streamflow are detected using the Mann–Kendall trend test 52 and the slope of linear trends in Peninsular catchments is computed using the Sen–Theil slope estimator 53 . In order to facilitate a relative comparison of trends across catchments of different sizes, they are expressed in units of percentage change per decade following previous studies 54 – 56 , such that $T = \frac{\beta}{\mu} \times 10 \times 100\%$, where $T$ is the trend in %/decade, $\beta$ is the Sen's slope (in units per year) and $\mu$ is the mean of the annual maximum streamflow time series. Decadal trends in the flood drivers are estimated similarly.
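As a quick illustration of this normalisation, here is a minimal pure-Python sketch of the Sen–Theil slope and its conversion to %/decade for annual data; the function names and the declining sample series are invented.

```python
from statistics import median

def sen_slope(y):
    """Sen-Theil slope: median of all pairwise slopes (units per year here,
    since consecutive entries are one year apart)."""
    return median((y[j] - y[i]) / (j - i)
                  for i in range(len(y)) for j in range(i + 1, len(y)))

def trend_percent_per_decade(y):
    """Sen's slope scaled to a decade and normalised by the series mean."""
    return sen_slope(y) * 10.0 / (sum(y) / len(y)) * 100.0

annual_max_flow = [100, 98, 97, 95, 94, 92, 91, 89, 88, 86]  # declining series
trend = trend_percent_per_decade(annual_max_flow)  # negative, in %/decade
```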
Extracting flood characteristics
Annual flood peak, volume and duration series are extracted using the procedure followed in previous studies 57 – 60 . The annual maximum of the streamflow data is the peak flow. Baseflow is used as the criterion to delineate the flood hydrograph and derive flood volume and flood duration. The start day $SD_i$ of flood runoff is marked at the abrupt rise in discharge (above the baseflow), and the flattening of the recession limb (a return to baseflow) marks the end day $ED_i$, as shown in Fig. 3 b. Flood duration for the selected year is $D_i = ED_i - SD_i$. Flood volume for the i th year with observed streamflow $Q_{ij}$ on the j th day is computed as $V_i = \sum_{j=SD_i}^{ED_i} Q_{ij} - \frac{D_i}{2}\,(Q_{i,SD_i} + Q_{i,ED_i})$, where $Q_{i,SD_i}$ and $Q_{i,ED_i}$ are the observed daily streamflows on the start and end dates of flood runoff, respectively. In this study, a flood event is defined as the upper part of the hydrograph lying above the fixed threshold, as described by Karmakar and Simonovic 59 . Flood duration and flood volume are then estimated for the threshold discharge after deducting the baseflow volume.
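The delineation can be sketched as follows. This simplified version grows the event window outward from the annual peak until discharge returns to baseflow on each side, then reports peak, duration and the volume in excess of baseflow; it is an illustrative reading of the procedure rather than the authors' exact algorithm, and both series are invented.

```python
def flood_characteristics(q, baseflow):
    """Annual flood peak, duration (days) and volume above baseflow.

    The event window is grown outward from the annual peak until
    discharge returns to baseflow on each side (simplified delineation).
    """
    peak_idx = max(range(len(q)), key=lambda i: q[i])
    start = peak_idx
    while start > 0 and q[start - 1] > baseflow[start - 1]:
        start -= 1
    end = peak_idx
    while end < len(q) - 1 and q[end + 1] > baseflow[end + 1]:
        end += 1
    duration = end - start + 1
    volume = sum(q[j] - baseflow[j] for j in range(start, end + 1))
    return q[peak_idx], duration, volume  # volume in (m^3/s) * day

q  = [5, 6, 30, 55, 40, 20, 9, 5, 5]   # daily discharge around one event
bf = [5, 5,  6,  8,  9,  8, 7, 5, 5]   # separated baseflow
peak, dur, vol = flood_characteristics(q, bf)
```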
Quantifying the impact of flow regulations
Human activities like the construction of reservoirs significantly affect the hydrological system by disturbing the natural flow conditions. The paired-catchment approach is the classical method in catchment hydrology to detect the impact of a disturbance on the flow regime 34 , 61 , 62 . The flow regimes of two nearby catchments with similar physical characteristics are compared in this method by setting one as a benchmark catchment and the other as a disturbed catchment. Indian catchments are large and it is difficult to find an adequate number of pairs given the presence of a large number of hydraulic structures in a single catchment. Therefore, the “ pre-post-disturbance ” approach, which compares hydrologic extremes before and after a disturbance, is used in this study to quantify the impact of human influence. A minimum of 15 years and an optimum of 20 years for each part are required such that the normal, dry and wet years within each period are equally distributed 63 , 64 . Stream networks are delineated for the Peninsular river basins and the locations of dams are marked on the network. The streamflow gauging stations which lie downstream of the dams on the same flowlines are identified. A comprehensive analysis of changes in flood characteristics (peak, volume and duration) is conducted by dividing the streamflow records into two parts: the undisturbed period and the disturbed period. The length of streamflow records varies between a maximum of 52 years (1967–2018) and a minimum of 40 years (1979–2018) for quantifying the changes in flood characteristics. For a robust assessment of changes, only the structures constructed after 1980 are considered so that a good length of records is available before a disturbance. Changes in the flood characteristics are estimated as $\Delta X = \frac{\bar{X}_{post} - \bar{X}_{pre}}{\bar{X}_{pre}} \times 100\%$, where $\bar{X}_{post}$ and $\bar{X}_{pre}$ are the mean characteristics after the disturbance and before the disturbance, respectively.
Event coincidence analysis
Event Coincidence Analysis (ECA) is a recently developed statistical tool exclusively designed for measuring the strength, directionality and time lag of statistical interdependency between two event series 65 , 66 . Donges et al. 67 used the ECA framework to investigate the role of floods as triggers of epidemic outbreaks with country-level observational data. Manoj et al. 68 employed ECA to identify and quantify the preconditioning of precipitation extremes by soil moisture anomalies over India. ECA is suitable for testing the existence, direction and significance of a possible relationship between pairs of event series 68 , 69 . ECA is utilized in this study to test for the existence and significance of the statistical interrelationship of floods with the flood drivers.
Let $Y$ be the flood events occurring at timings $t^Y_1, \ldots, t^Y_{N_Y}$ and $X$ be the flood driver events (rainfall, soil moisture and baseflow) occurring at times $t^X_1, \ldots, t^X_{N_X}$; $N_Y$ and $N_X$ are the number of events of event series $Y$ and $X$, respectively. The event series are assumed to cover a time interval with length $T$, such that $t^Y_j \in [0, T]$ and $t^X_i \in [0, T]$, which yields the event rates $\lambda_Y = N_Y / T$ and $\lambda_X = N_X / T$. Coincidences of events in both series are counted and the strength of the statistical interrelationship is quantified using a measure called the “ Trigger Coincidence Rate ”. It measures the fraction of $X$-type (driver) events that are followed by at least one $Y$-type (flood) event. Multiple $Y$-type events within the coincidence interval are counted only once.
The trigger coincidence rate 67 is defined as $r_{tr}(\Delta T, \tau) = \frac{1}{N_X} \sum_{i=1}^{N_X} \Theta\!\left[\sum_{j=1}^{N_Y} \mathbf{1}_{[0, \Delta T]}\big((t^Y_j - \tau) - t^X_i\big)\right]$, where $\Delta T$ is the coincidence interval and $\tau$ is the time lag parameter. An instantaneous coincidence occurs if events of the two event series occur close in time, i.e. if the condition $0 \le t^Y_j - t^X_i \le \Delta T$ is satisfied. A lagged coincidence occurs when the $Y$-type events shifted by the time lag, i.e. at time $t^Y_j - \tau$, coincide with the $X$-type event and the condition $0 \le (t^Y_j - \tau) - t^X_i \le \Delta T$ holds. $\Theta$ denotes the Heaviside step function (with $\Theta(0) = 0$), which conveys whether the flood drivers have a triggering effect on the flood events or not. The values of $r_{tr}$ vary between 0 (complete absence of a triggering effect of $X$ on $Y$) and 1 ($Y$-type events succeed all the $X$-type events).
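A direct sketch of the trigger coincidence rate, i.e. the fraction of driver events followed within the coincidence interval (after the lag) by at least one flood event; function names and event timings below are invented for illustration.

```python
def trigger_coincidence_rate(t_drivers, t_floods, delta_t, tau=0.0):
    """Fraction of driver events followed, within the coincidence interval
    delta_t after time lag tau, by at least one flood event.
    Multiple floods inside one window count once (the any() call)."""
    if not t_drivers:
        return 0.0
    hits = sum(
        1 for tx in t_drivers
        if any(0.0 <= (ty - tau) - tx <= delta_t for ty in t_floods)
    )
    return hits / len(t_drivers)

# Illustrative event days: drivers mostly precede floods by 1-3 days
drivers = [10, 50, 120, 200, 310]
floods = [11, 52, 123, 305]
rate = trigger_coincidence_rate(drivers, floods, delta_t=3.0)  # 3 of 5 hit
```

Shrinking `delta_t` tightens the window and drops coincidences, which is how the sensitivity to the coincidence interval can be probed.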
Testing the significance of coincidences
The two event series are assumed to be randomly distributed and mutually independent over the continuous time interval $[0, T]$. The occurrences of coincidences are rare and thus the $X$ and $Y$ event time series are treated as two independent Poisson processes. This allows derivation of the distributions of coincidence rates to test the statistical significance of ECA results. The probability of occurrence of a given number $k$ of trigger coincidences between the two event series can be approximated by the Binomial distribution 66 , $P(K_{tr} = k) = \binom{N_X}{k}\, p^k \,(1 - p)^{N_X - k}$ with $p = 1 - e^{-\lambda_Y \Delta T}$, where $\Delta T$ is the temporal tolerance and $\tau$ is the time lag between $X$ and $Y$. The significance test for the coincidence measure is based on the null hypothesis that the number of coincidences can be explained by two independent series of randomly distributed events. The $p$-value of the empirically observed number of coincidences $K^{obs}_{tr}$ with respect to the test distribution in Eq. ( 4 ), i.e. the probability to obtain a number of coincidences equal to or greater than $K^{obs}_{tr}$, is given by $P(K_{tr} \ge K^{obs}_{tr})$. The null hypothesis is rejected if the $p$-value is smaller than the defined confidence level α.
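Under the Poisson null model, the analytic p-value reduces to a binomial tail sum. In the sketch below, taking the per-event success probability as p = 1 − exp(−λ_Y ΔT), the chance that a Poisson flood process produces at least one event in a window of length ΔT, is my reading of the cited framework and should be treated as an assumption; the event counts and rates are invented.

```python
from math import comb, exp

def eca_p_value(k_obs, n_drivers, flood_rate, delta_t):
    """P(K >= k_obs) for the number of trigger coincidences among n_drivers
    driver events, with per-event success probability
    p = 1 - exp(-flood_rate * delta_t) under the independent-Poisson null."""
    p = 1.0 - exp(-flood_rate * delta_t)
    return sum(comb(n_drivers, k) * p**k * (1.0 - p)**(n_drivers - k)
               for k in range(k_obs, n_drivers + 1))

# 25 of 40 extreme-baseflow events followed by a flood within 3 days,
# against a background flood rate of ~2 per year on a daily time axis
p_val = eca_p_value(k_obs=25, n_drivers=40, flood_rate=2 / 365.25, delta_t=3.0)
```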
Trend analysis
Annual flood magnitudes have decreased in Peninsular catchments over the period 1979–2018 (Fig. 2 a). An increase in flood magnitudes is observed only in two catchments (Kurubhata and Kantamal) of the Mahanadi river basin and one catchment (Dameracherla) of the Godavari river basin. Flood magnitudes are declining drastically in the Narmada river basin. Trends in flood magnitudes show a strong association with trends in annual mean baseflow (Fig. 2 b). The signs of flood magnitude trends are more consistent with the signs of trends in baseflow than with those of rainfall and soil moisture. The strength of dependence between trends in flood magnitudes and trends in flood drivers is summarized with the Kendall rank correlation coefficient. A high value of Kendall's τ is observed for the pairs of trends in annual flood magnitude and trends in mean annual baseflow across all the Peninsular catchments. Trends in annual maximum daily rainfall and annual mean soil moisture show a weak correlation with trends in flood magnitudes. These results suggest that floods in Peninsular catchments are strongly correlated with baseflow compared to rainfall and soil moisture.
Effect of flow regulations on flood characteristics
The impacts of reservoir flow regulations in Peninsular catchments are assessed using the “ pre-post-disturbance ” approach in this study. Locations of dams are marked on the streamflow network and the streamflow gauges downstream of these dams on the same flow lines are identified (Fig. 3 a). In total, 31 dams constructed after 1980 are considered, as per the information from the National Register of Large Dams (NRLD), and 25 streamflow gauges are marked which have a good length of flow records available for the pre-disturbance period. When there is more than one reservoir upstream of a stream gauge, the impact is evaluated for the structure that was built first, and its year of construction decides the length of records for any newer dam downstream of the old dam. The pre-dam-construction period of the newer dam begins from the year of construction of the older dam lying upstream and ends at its own year of construction. Annual flood peak, volume and duration are computed for the pre- and post-dam-construction periods as illustrated in Fig. 3 b. The comparison of mean flood characteristics for the two periods shows that reservoir regulations have a strong influence on flood characteristics. Reservoir regulation has increased the flood duration by up to 65% while reducing the peak flow and flood volume by ~ 48.5% and ~ 50%, respectively. Floods after the construction of dams last longer but are less severe, with reduced peak and volume, in Peninsular catchments. These impacts are independent of the purpose of the reservoirs. Upper Wardha dam, which serves the purpose of flood control along with irrigation and water supply, shows a reduction in all the flood characteristics (peak − 21.5%, volume − 32.1% and duration − 14%). A flood alleviation effect of reservoirs is observed in different parts of the world 31 , 34 , 36 . The reduction in flood severity indicates a positive effect of dam construction on flood alleviation in Peninsular India.
Importance of flood drivers
It is well established in the literature that antecedent soil moisture conditions play an important role in the hydrological response of a catchment 11 , 12 , 16 , 37 . However, the importance of antecedent conditions may extend deeper into the saturated zone, as revealed in a recent study by Berghuijs and Slater 29 . Here, we investigate the association of annual floods with the flood drivers (baseflow, rainfall and soil moisture) using the Pearson correlation coefficient for a range of antecedent periods from 1 to 14 days (Fig. 4 a). Instantaneous values of baseflow and soil moisture are used, whereas rainfall is accumulated from a specific lag up to the flooding day in the correlation analysis. Baseflow shows a stronger correlation with annual maximum flows than soil moisture at all time lags. Baseflow dominates for the first few lags, i.e. less than 3–4 days, and rainfall dominates at longer antecedent periods for 50 catchments. For the remaining 20 catchments (especially from the Cauvery river basin), baseflow dominates for more than 5–7 days and rainfall dominates at even longer antecedent periods, as shown in Fig. S2 . A catchment with higher baseflow reflects wetter conditions, which means the chances of rapid runoff are high with an incoming rainfall event. On the other hand, the correlation between accumulated rainfall and flow peaks is relatively high at longer time lags because rainfall not only drives the flood peak but also contributes to soil moisture and groundwater levels. Rainfall accumulated over a longer period will eventually raise baseflows and thus contribute more water to river flows. Results are shown for Bamini, a randomly selected catchment with a semi-arid climate in the Godavari river basin. A similar correlation pattern is observed across 50 catchments of Peninsular India. At short time scales, flood magnitudes are more strongly associated with baseflow than with rainfall and soil moisture.
This observation suggests that baseflow contributes more water to the Peninsular catchments.
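The lag-correlation comparison can be illustrated with a tiny Pearson helper. The five-year samples below are fabricated so that baseflow sampled before the peak tracks flood magnitude while the rainfall totals here do not; they demonstrate the computation only, not real catchment data.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

peaks = [120.0, 95.0, 150.0, 80.0, 110.0]        # annual maximum flows
baseflow_lag5 = [30.0, 22.0, 41.0, 18.0, 27.0]   # baseflow 5 days before peak
rain_accum = [60.0, 70.0, 55.0, 80.0, 65.0]      # short-window rainfall totals
r_baseflow = pearson(baseflow_lag5, peaks)       # strongly positive here
r_rain = pearson(rain_accum, peaks)              # negative in this toy sample
```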
The spatial pattern of correlation between flood magnitudes and baseflow computed 5 days before the flood events is shown in Fig. 4 b. High positive correlation is observed across all the catchments indicating a strong association of baseflow with floods. This highlights the strong influence of baseflow on floods in Peninsular catchments. Relative contribution of baseflow to peak flows is further investigated using Baseflow Index (BFI) on the flooding day. This will help in quantifying the fraction of peak flow which comes from baseflow. Negative correlation is observed between flood magnitudes and BFI (Fig. 4 c). This suggests that although baseflow contributes more to river flows (Fig. 4 b); however, its contribution to the event flow magnitude decreases as surface runoff contributes a higher fraction of flood discharge than baseflow. Additionally, a flood event cannot occur without high rainfall even if the landscape has higher baseflow.
The importance of flood drivers is evaluated using the trigger coincidence rate for the period 1979–2018. The p-values are lower than the chosen confidence level; therefore, the null hypothesis of independent random series is rejected for all the catchments, and all trigger coincidence rates are statistically significant. Trigger coincidence rates are higher for baseflow than for rainfall and soil moisture across all the Peninsular catchments (Fig. 5 ). Baseflow 1 day prior to the flood event significantly influences river floods, whereas soil moisture on the previous day has the lowest triggering effect on floods (Fig. 5 a–c). This suggests that high baseflow conditions coincide more often with the severe 95th percentile flood events. The triggering effect of baseflow is longer-lasting, as the trigger coincidences remain high compared to the other two flood drivers at a time lag of 5 days (Fig. 5 d–f). Baseflow on the day before a flood has a higher triggering effect than at longer time lags. The ECA results corroborate the statement that baseflow significantly contributes to floods in Peninsular India.
The role of flood drivers may change with time; therefore, the investigation is carried out for two periods to better understand the evolving nature of floods and their association with flood drivers. The data are divided into two equal halves: 1979–1998 and 1999–2018. The triggering effect of flood drivers is assessed for the 85th as well as the 95th percentile extremes to understand the influence on floods of different magnitudes (Figs. S3 – S4 ). Baseflow has high trigger coincidence rates compared to rainfall and soil moisture, irrespective of the flood magnitude and record length. These findings suggest that baseflow plays a critical role in controlling floods in Peninsular India and that future floods depend on the pre-existing baseflow conditions during high rainfall events. | Conclusions
The active role of groundwater in storm runoff in streams was discovered decades ago 38 , 39 . Baseflow also exerts significant influence over entire flood frequency curves 40 . Despite the significant role of groundwater in storm runoff generation, groundwater is rarely considered in flood related studies. Recent studies consider the critical role of soil moisture in modulating floods 2 , 3 , 11 – 13 , whereas groundwater is often overlooked. The present study extends our knowledge of process-based controls on floods. Our analysis reveals that pre-existing baseflow conditions play an important role in driving floods. Baseflow is the dominant driver of floods at shorter time lags, while rainfall controls flood magnitudes at longer antecedent periods. The effect of baseflow is stronger than that of soil moisture and lasts for longer antecedent periods in Peninsular catchments.
The presence of a reservoir in a catchment significantly influences the natural flow regime through storage and releases. Reservoir regulation has reduced flood severity, i.e. flood peak and flood volume, but the duration of flood events has increased after the construction of dams in Peninsular catchments. This attenuation in flood severity is independent of the purpose of the reservoirs. The reduction in flood peak and volume is achieved by retaining water in the reservoir and releasing the excess over longer durations. Reservoir regulation thus has a positive effect by alleviating flood severity in Peninsular India.
One potential limitation of the present study is that we identified a single dominant mechanism of floods in the catchments. However, floods in a catchment can arise through a combination of different mechanisms. The present analysis can be further extended by conditioning on combinations of flood drivers using multivariate statistical tools to accurately estimate their combined effect on river flooding. Incorporating more information on the flood generating mechanisms is key to improving flood predictions and planning better preventive measures. |
Extreme rainfall prior to a flood event is often a necessary condition for its occurrence; however, rainfall alone is not always an indicator of flood severity. The antecedent wetness condition of a catchment is another important factor which strongly influences flood magnitudes. The key role of soil moisture in driving floods is widely recognized; however, antecedent conditions of the deeper saturated zone may also contribute to river floods. Here, we assess how closely flood magnitudes are associated with extreme rainfall, soil moisture and baseflow in 70 catchments of Peninsular India for the period 1979–2018. Annual flood magnitudes have declined across most of the catchments. The effect of flow regulations is also assessed to understand the impact of human interventions on flood characteristics. Reservoir regulation has a positive effect by reducing the flood peak and volume, whereas the duration of flood events has increased after the construction of dams. Baseflow exhibits similar patterns of trends as floods, whereas trends in rainfall and soil moisture extremes are weakly correlated with trends in flood magnitudes. Baseflow is found to influence flood magnitudes more strongly than soil moisture at various time lags. Further analysis with event coincidence analysis confirms that baseflow has a stronger triggering effect on river floods in Peninsular India.
Subject terms | Study area
Narmada, Tapi, Mahanadi, Godavari, Krishna and Cauvery are six major river basins of Peninsular India. Tapi is the smallest river basin with an area of 65,145 km 2 and Godavari is the largest which covers an area of 312,812 km 2 . Narmada and Tapi are west flowing rivers which join the Arabian Sea, while other four are east flowing rivers which drain into the Bay of Bengal. Godavari is the longest river of length 1465 km and Cauvery is the shortest river with a length of 560 km in Peninsular India. The locations of 70 selected catchments are shown in Fig. 1 a. The catchment areas vary in size from 1260 to 307,800 km 2 (Fig. S1 ). The elevation varies between a minimum of 1 m to a maximum of 937 m (Fig. 1 b). Spatial variation of mean annual maximum runoff rate averaged over a period of 40 years is shown in Fig. 1 b. High runoff rates are observed in Narmada, lower and middle Mahanadi, Krishna upper sub-basin, Tungabhadra upper sub-basin and Cauvery upper sub-basin. Aridity Index (AI) defined as the ratio of mean annual precipitation to mean annual potential evapotranspiration is shown in Fig. 1 c. United Nations Environment Programme (UNEP) 35 provides a climate classification scheme based on the Aridity Index values. Peninsular catchments have semi-arid (AI 0.2–0.5), dry sub-humid (AI 0.5–0.65) and humid (AI > 0.65) climate conditions. Spatial variation of Baseflow Index (BFI), the long-term ratio between baseflow to total streamflow is shown in Fig. 1 d. The distribution of BFI is relatively even with BFI values between 0.25 and 0.50 for most of the catchments.
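The UNEP class boundaries quoted above map directly to a simple lookup, sketched below; the label for AI < 0.2 lies outside the range quoted in the text and is added here as an assumption.

```python
def unep_climate_class(aridity_index):
    """UNEP climate class from the Aridity Index (mean annual P / PET),
    using the thresholds quoted in the text."""
    if aridity_index < 0.2:
        return "arid or drier"   # below the study's quoted range (assumption)
    if aridity_index < 0.5:
        return "semi-arid"
    if aridity_index < 0.65:
        return "dry sub-humid"
    return "humid"
```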
Supplementary Information
| Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-024-51850-w.
Acknowledgements
Funding received from the Ministry of Earth Sciences (MoES), Government of India, through the project “Advanced Research in Hydrology and Knowledge Dissemination”, Project no.: MOES/PAMC/H&C/41/2013-PC-II is greatly acknowledged.
Author contributions
First author, S.S., conceptualized the problem, collected and processed the data, performed the entire analysis in the R programming language, prepared the first draft of the manuscript and revised it. Second author, P.P.M., played a supervisory role and helped in conceptualizing the problem and preparing the final version of the manuscript.
Data availability
India-Water Resources Information System (India-WRIS) daily streamflow data used in this study are available at https://indiawris.gov.in/wris/#/RiverMonitoring . The India Meteorological Department (IMD) provides daily high resolution gridded rainfall data, which can be accessed from https://www.imdpune.gov.in/cmpg/Griddata/Rainfall_25_NetCDF.html . The European Space Agency Climate Change Initiative (ESA CCI) soil moisture dataset with daily temporal and 0.25° spatial resolution is available at https://esa-soilmoisture-cci.org/data . The Global Aridity Index is available at 10.6084/m9.figshare.7504448.v5. Shuttle Radar Topographic Mission (SRTM) data at 30 m spatial resolution are available at https://srtm.csi.cgiar.org/srtmdata/ .
Competing interests
The authors declare no competing interests.

License: CC BY. Citation: Sci Rep. 2024 Jan 13; 14:1251.
PMC10787777 (PMID: 38218922)

Introduction
Cancer cells face heightened proteotoxic stress compared to their normal counterparts due to various factors, including the production of mutated proteins, the upregulation of multiprotein complex components induced by aneuploidy [ 1 ], and increased protein synthesis driven by oncogenic activation [ 2 ]. As a result, the survival of cancer cells relies heavily on the intricate machinery responsible for alleviating proteotoxic stress and maintaining proteostasis. This machinery encompasses coordinated processes like protein synthesis, folding, processing, and degradation [ 2 , 3 ]. One pivotal player in these proteostasis-associated processes is Valosin-containing protein (VCP/p97), a hexameric AAA+ ATPase [ 4 ]. VCP plays a significant role in diverse cellular functions, including endoplasmic reticulum (ER)-associated degradation (ERAD) [ 5 ], mitochondrial-associated degradation (MAD) [ 6 ], and the ubiquitin-proteasome system (UPS) [ 7 ]. Notably, VCP is frequently overexpressed in various cancer types and holds promise as both a cancer prognostic biomarker and therapeutic target [ 8 , 9 ]. However, the precise mechanisms by which VCP inhibition selectively eradicates cancer cells while sparing non-cancerous cells have remained elusive.
In this study, we present evidence that VCP inhibition preferentially induces cytotoxicity in breast cancer cells compared to non-transformed cells, primarily through the induction of paraptosis. Paraptosis is a non-apoptotic cell death mechanism characterized by cytoplasmic vacuolation originating from the ER and/or mitochondria [ 10 ]. Since apoptotic pathways are often compromised in drug-resistant cancer cells, leading to therapeutic failures [ 11 ], it becomes imperative to explore strategies that promote alternative cell death mechanisms, such as paraptosis, especially in tumors that have progressed despite conventional apoptosis-targeted therapies. A comprehensive understanding of the mechanistic details of cancer cell death is crucial for devising effective therapeutic strategies. Importantly, paraptosis differs from apoptosis in that it does not involve the release of mitochondrial cytochrome c or caspase activation. While various factors, including proteasome inhibition [ 12 – 14 ], thiol proteostasis impairment [ 15 , 16 ], and Ca2+ imbalance [ 17 ], have been implicated in paraptosis, the detailed molecular basis remains to be fully elucidated.
Our study has illuminated the pivotal role of VCP as a molecular target for inducing paraptosis in cancer cells. Mechanistically, VCP inhibition in breast cancer cells intensifies proteotoxic stress by restoring translation, thereby contributing to the occurrence of paraptosis. This process involves the activating transcription factor 4 (ATF4)/DNA damage-inducible transcript 4 (DDIT4) axis and the mechanistic target of rapamycin complex 2 (mTORC2)/Akt signaling pathways, which play a significant role in translational recovery and subsequent amplification of proteotoxic stress. Furthermore, we emphasize the critical role of eukaryotic translation initiation factor 3 subunit D (eIF3d) as a mediator for translational recovery in cancer cells exposed to proteotoxic stress. In contrast, when VCP is inhibited in non-transformed cells, it triggers translational suppression, ultimately alleviating proteotoxic stress and promoting cell survival. Considering the crucial role of hyperactive Akt, driven by oncogenes, in cancer cell survival and resistance to therapy [ 18 ], identifying vulnerabilities within specific subsets of cancer cells can pave the way for tailored therapies targeting oncogene-addicted cancer cells.
In summary, our work suggests that inducing paraptosis through VCP inhibition may open up novel therapeutic avenues for cancer cells characterized by hyperactive Akt.

Materials and methods
Chemicals and antibodies
Chemicals were purchased from various sources: eeyarestatin-1 (Eer1), LY294002, PD98059, U0126, SP600125, and SB203580 from Calbiochem (EMD Millipore Corp., Billerica, MA, USA); CB-5083 from Biovision (Milpitas, California, USA); NMS-873 from APExBIO (Houston, TX 77014, USA); z-VAD-fmk from R&D Systems (Minneapolis, MN, USA); Necrostatin-1 (Nec-1), 3-methyladenine (3-MA), bafilomycin A1 (Bafilo), chloroquine (CQ), ferrostatin-1 (Ferro), and cycloheximide (CHX) from Sigma-Aldrich (St. Louis, MO, USA); PP242 and Torin1 from Selleckchem (Houston, TX 77014, USA); TRAIL from KOMA BIOTECH (Seoul, South Korea); MitoTracker-Red (MTR), tetramethylrhodamine methyl ester (TMRM), 4′,6-diamidino-2-phenylindole (DAPI), and propidium iodide (PI) from Molecular Probes (Eugene, OR, USA). The following antibodies were employed: VCP (#2648), GFP (#2555), p-eIF2α (#9721), eIF2α (#9722), CHOP (#2895), Nrf1 (#8052), p-ERK1/2 (#9101), ERK (#9102), p-Akt (S473) (#9271), Akt (#9272), p-Akt (T308) (#9275), p-p70S6K (#9234), p70S6K (#2708), p-4EBP1 (#9451), 4EBP1 (#9452), Raptor (#2280), Rictor (#2114), and ATF4 (#11815) from Cell Signaling Technology (Danvers, MA, USA); β-actin (sc-47778), cytochrome C (sc-13156), Tom20 (sc-11415), ubiquitin (sc-8017), ATF4 (sc-200), and Mcl-1 (sc-819) from Santa Cruz (Dallas, TX, USA); α-Puromycin (MABE343) from Millipore (Billerica, MA, USA); Calnexin (CNX; PA5-19169) from Invitrogen (Carlsbad, CA, USA); Tim23 (611222) from BD Biosciences (San Jose, CA, USA); Caspase-3 (ADI-AAP-113) from Enzo Life Sciences (Farmingdale, NY, USA); poly (ADP-ribose) polymerase (PARP; ab32071) and Bap31 (ab37120) from Abcam (Cambridge, UK); and Ras (clone RAS10, #05-516) from Millipore. The secondary antibodies were anti-rabbit IgG HRP (G-21234) and anti-mouse IgG HRP (G-21040) from Molecular Probes, Inc. (Eugene, OR, USA), and anti-rat IgG HRP from Sigma (A9037-1).
Cell culture
Human breast cancer cell lines, the MCF10A human mammary epithelial cell line, and HEK-293T cells were acquired from the American Type Culture Collection (ATCC, Manassas, VA, USA). All cell lines underwent regular mycoplasma contamination checks, and their authenticity was confirmed through standard morphological examination using a microscope. The cell cultures were as follows: MDA-MB 231 and BT549 cells in RPMI-1640 medium (GIBCO-BRL, Grand Island, NY, USA); T47D and MDA-MB 468 cells in DMEM with high glucose (Hyclone, Logan, UT, USA); MDA-MB 435 S cells in DMEM with low glucose (Hyclone); Hs578T cells in DMEM high-glucose medium supplemented with 10 μg/ml insulin (Sigma-Aldrich, St. Louis, MO, USA); and MCF10A cells in DMEM/F12 medium supplemented with 5% horse serum, insulin, human epidermal growth factor, hydrocortisone, and cholera toxin (Calbiochem).
Cell viability assay
All experiments were conducted in a low-glucose DMEM medium to exclude the effects of high glucose concentrations. Cells were cultured in 24-well plates (4×10 4 cells per well), treated as indicated, fixed with methanol/acetone (1:1) at −20 °C for 5 min, washed with PBS, and stained with 1 μg/ml propidium iodide at room temperature for 10 min. Plates were imaged using an IncuCyte device (Essen Bioscience, Ann Arbor, MI, USA) and analyzed with IncuCyte ZOOM 2016B software. The IncuCyte program’s processing definition was set to identify attached (live) cells by their red-stained nuclei. The percentage of live cells was normalized to that of untreated control cells (100%).
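The normalization step in this assay can be expressed as a short Python sketch (illustrative only; the function name is ours, and the actual quantification was performed in the IncuCyte ZOOM software):

```python
def percent_live(treated_counts, untreated_counts):
    """Express attached (PI-stained) cell counts in treated wells as a percentage
    of the mean count of untreated control wells (controls are set to 100%)."""
    control_mean = sum(untreated_counts) / len(untreated_counts)
    return [100.0 * n / control_mean for n in treated_counts]
```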
Immunoblot analysis
Immunoblot analysis was performed as described previously [ 33 ]. Representative results from at least three independent experiments are displayed, and unprocessed scans of immunoblots are provided as Source Data.
Immunofluorescence microscopy
Following treatments, cells were fixed with acetone/methanol (1:1) for 5 min at −20 °C or with 4% paraformaldehyde for 10 min at room temperature. Fixed cells were blocked in 5% BSA in PBS for 30 min and incubated overnight at 4 °C with primary antibodies [BAP31 (rabbit, ab37120 from Abcam), Tim23 (mouse, 611222 from BD), CNX (goat, PA5-19169 from Invitrogen), cytochrome c (mouse, sc-13156 from Santa Cruz), or Tom20 (mouse, sc-17764 from Santa Cruz)] diluted (1:500) in blocking buffer. Cells were then washed and incubated with diluted (1:1000) anti-mouse or anti-rabbit Alexa Fluor 488 or 594 (Molecular Probes) for 1 h at room temperature. After mounting on slides with ProLong Gold antifade mounting reagent (Molecular Probes), cells were observed with a K1-Fluo confocal laser scanning microscope (Nanoscope Systems, Daejeon, Korea) using an appropriate filter set (excitation bandpass, 488 nm; emission bandpass, 525/50).
Transmission electron microscopy
Cells were pre-fixed in Karnovsky’s solution (1% paraformaldehyde, 2% glutaraldehyde, 2 mM calcium chloride, 0.1 M cacodylate buffer, pH 7.4) for 2 h, post-fixed in 1% osmium tetroxide and 1.5% potassium ferrocyanide for 1 h, dehydrated with 50–100% alcohol, embedded in Poly/Bed 812 resin (Pelco, Redding, CA, USA), polymerized, and observed under an electron microscope (EM 902 A, Carl Zeiss, Oberkochen, Germany).
Mouse xenograft studies
Animal experiments adhered to the guidelines and regulations approved by the Institutional Animal Care and Use Committees of Asan Institute for Life Science (approval number 2017-12-091, granted on May 02, 2017). Female BALB/c nude mice (nu/nu, 5 weeks old; Japan SLC, Hamamatsu, Japan) were injected in the right flank with MDA-MB 435 S cells (5 × 10⁶ cells/mouse). Tumors were allowed to grow for 3 weeks until the average tumor volume reached 100–150 mm³. Mice were randomized into three groups ( n = 5 per group) and received oral administration (O.A.; qd4/3off) of vehicle (PBS containing 0.25% DMSO), 100 mg/kg CB-5083, or 150 mg/kg CB-5083. Researchers were blinded to the group allocations during the experiment and when assessing the outcome. Tumor size was measured twice a week for 2 weeks, and tumor volume was calculated. On the 15th day, mice were sacrificed, and the tumors were isolated, fixed in 4% paraformaldehyde, and embedded in paraffin. Tissue sections stained with hematoxylin and eosin (H&E) were observed under a K1-Fluo microscope (Nanoscope Systems) and photographed using a complementary metal-oxide-semiconductor (CMOS) camera.
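For reference, caliper-based tumor volumes are commonly calculated with the modified-ellipsoid formula V = (length × width²)/2. The paper does not state which formula it used, so the sketch below is an assumption, not the authors' method:

```python
def tumor_volume(length_mm, width_mm):
    """Caliper-based tumor volume in mm^3, using the modified-ellipsoid
    formula V = (length * width^2) / 2. This formula is assumed here;
    the exact formula used in the study is not stated."""
    return 0.5 * length_mm * width_mm ** 2
```

With this formula, a 10 mm × 5 mm tumor would be reported as 125 mm³.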
Construction of plasmids encoding mCherry-VCP WT and mCherry-VCP QQ
mCherry-VCP WT and mCherry-VCP QQ were generated from the plasmids VCP(wt)-EGFP (#23971) and VCP(DKO)-EGFP (VCP QQ) (#23974) (Addgene, Watertown, MA, USA), respectively, using the pENTRY/pDEST-mCherry system (Invitrogen). The fragments encoding VCP WT and VCP QQ were PCR-amplified using the following primers: forward (ATGGCTTCTGGAGCCGATTCA) and reverse (GCCATACAGGTCATCVATCATT). These fragments were used to generate the pENTRY-VCP WT and pENTRY-VCP QQ vectors. Subsequently, mCherry-VCP WT and mCherry-VCP QQ were generated by recombining the pENTRY-VCP WT or pENTRY-VCP QQ vector with a pCS-mCherry vector using the Gateway LR cloning system (Invitrogen).
Generation and preparation of recombinant adenoviruses expressing VCP WT-EGFP and VCP QQ-EGFP
Replication-incompetent adenoviruses expressing VCP WT-EGFP or VCP QQ-EGFP were generated as described previously [ 74 , 75 ]. The DNA fragment encoding VCP WT-EGFP or VCP QQ-EGFP was excised from the respective plasmid (VCP WT-EGFP (#23971) or VCP(DKO)-EGFP (#23974), Addgene) using the BamHI and BglII restriction enzymes. These fragments were then ligated with the BamHI-digested adenoviral shuttle vector, pCA14. The resulting constructs, pCA14/VCP WT-EGFP and pCA14/VCP QQ-EGFP, were linearized by PvuI digestion. The E1/E3-deleted adenoviral vector, dE1-RGD, was also linearized by BstBI digestion. These linearized vectors were co-transformed into E. coli BJ5183 competent cells for homologous recombination. The resulting adenoviral plasmids, dE1/VCP WT-EGFP and dE1/VCP QQ-EGFP, were digested with PacI and transfected into 293A cells. Finally, adenoviruses expressing VCP WT-EGFP or VCP QQ-EGFP were propagated, amplified in 293A cells, and purified by cesium chloride density-gradient centrifugation.
Small interfering RNA-mediated gene silencing
siRNA Negative Control (siNC) (Stealth RNAi TM , 12935300) was purchased from Invitrogen (Carlsbad, CA, USA). VCP-targeted siRNAs were acquired from QIAGEN (Hilden Düsseldorf, NRW, Germany). These included siVCP #1 (target sequence AACAGCCATTCTCAAACAGAA), siVCP #2 (target sequence ATCCGTCGAGATCACTTTGAA), and siVCP #3 (target sequence AAGATGGATCTCATTGACCTA). CHOP ( DDIT3 ) targeted siRNA (target sequence GAGCUCUGAUUGACCGAAUGGUGAA) was synthesized by Invitrogen. siATF4 (target sequences: CCACUCCAGAUCAUUCCUU, GGAUAUCACUGAAGGAGAU, and GUGAGAAACUGGAUAAGAA, sc-35112) was obtained from Santa Cruz. The siRNA oligonucleotides were annealed and transfected into cells using the RNAiMAX reagent (Invitrogen) following the manufacturer’s instructions. Western blotting was performed to confirm successful siRNA-mediated knockdown.
Lentivirus-mediated shRNA transduction
To generate the lentiviral vectors encoding short hairpin RNA (shRNA), the pLKO.1 neo plasmid (#13425: Addgene, Cambridge, MA, USA) was digested using Age I and EcoR I. Two oligonucleotide strands were mixed and incubated at 95 °C for 4 min, and then at 70 °C for 10 min before slowly cooling to room temperature. The annealed oligo pair was ligated into the digested pLKO.1 neo plasmid using T4 ligase at 20 °C for 16 h. The sequences of the oligonucleotides used to knock down each target gene are listed in Supplementary Table 1 . To produce the lentivirus containing each plasmid, HEK-293T cells were transfected with the lentiviral vector in the presence of pMD2.G/psPAX2.0 using linear polyethyleneimine (MW2,500; Polysciences, Warrington, PA, USA). Following transfection, the virus-containing supernatants were filtered, combined with polybrene, and used to infect MDA-MB 435 S cells. qRT-PCR and Western blot analyses were performed to validate the efficiency of transfection. The sequences of the shRNA are provided in Supplementary Table 1 .
Quantitative Real-Time RT-PCR (qRT-PCR)
Total RNA was extracted using the TRIzol® reagent (Invitrogen). Subsequently, cDNA was synthesized using 1 μg of total RNA with the M-MLV cDNA Synthesis kit (EZ006S; Enzynomics, Daejeon, Korea). Quantitative real-time polymerase chain reaction (qRT-PCR) was conducted using a Bio-Rad Real-Time PCR System (Bio-Rad, Richmond, CA, USA). The results were analyzed using the 2 –ΔΔCt method [ 76 ]. Primers for qRT-PCR are listed in Supplementary Table 2 .
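The 2^–ΔΔCt calculation [ 76 ] can be sketched as follows (a minimal Python illustration; function and argument names are ours):

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-delta-delta-Ct method: normalize the
    target gene's Ct to a reference gene within each condition, then
    compare the treated sample to the control."""
    delta_sample = ct_target_sample - ct_ref_sample      # delta-Ct (sample)
    delta_control = ct_target_control - ct_ref_control   # delta-Ct (control)
    return 2.0 ** -(delta_sample - delta_control)        # 2^-(delta-delta-Ct)
```

For instance, a target whose ΔCt is two cycles lower in the treated sample than in the control is reported as a 4-fold increase in expression.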
Establishment of MCF10A cell lines stably expressing HRas G12V and KRas G12V
To establish cell lines expressing HRas G12V and KRas G12V , GP2-293 packaging cells were co-transfected with pVSV-G (#631530: Clontech, Mountain View, CA, USA) along with either pBABE-puro, pBABE puro H-Ras V12, or pBABE puro K-Ras V12 (#9051, #9052, or #1764: Addgene) using a CalPhosTM Mammalian Transfection Kit (#631312, Clontech) following the manufacturer’s instructions. Retroviral supernatants were used to transduce MCF10A cells in the presence of polybrene (5 mg/mL; Millipore, Burlington, MA, USA). Transduced cells were selected with puromycin (Invivogen, San Diego, CA, USA) for 3 weeks. Selected single cells were isolated, and the expression of HRas G12V and KRas G12V was confirmed by Western blotting.
Morphological examination of ER and mitochondria
Cell lines stably expressing fluorescence in the ER lumen (YFP-ER cells), ER membrane (Sec61β-GFP cells), or mitochondria (YFP-Mito cells) [ 16 ] were used for morphological studies. YFP-ER cells were stained with 100 nM MitoTracker-Red (MTR) for 10 min to observe both the ER and mitochondria. Confocal microscopy was performed using a K1-Fluo confocal laser scanning microscope (Nanoscope Systems, Daejeon, Korea) with an appropriate filter set (excitation bandpass, 488 nm; emission bandpass, 525/50).
Analysis of protein synthesis by puromycin labeling
Protein synthesis was monitored using the SUnSET method [ 40 ]. Briefly, newly synthesized peptides in cultured cells were labeled by adding 10 μg/ml puromycin for 10 min before cell collection. Whole-cell extracts were prepared for Western blotting using an anti-puromycin antibody (Millipore) and anti-mouse IgG-HRP-linked antibody (Molecular Probes). Fold changes in the protein levels of interest compared to β-actin were calculated following densitometric analysis.
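The fold-change calculation described above can be sketched as follows (illustrative Python; names are ours, and the actual quantification was done by densitometry of the blots):

```python
def normalized_fold_change(band, actin, band_control, actin_control):
    """Densitometric fold change: the band intensity of the protein of interest
    is normalized to beta-actin in the same lane, then expressed relative to
    the normalized intensity of the untreated control lane."""
    return (band / actin) / (band_control / actin_control)
```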
Statistical analysis
All experiments were repeated at least three times. Data were presented as mean ± standard deviation. Statistical analysis was performed using GraphPad Prism 9 (GraphPad Software Inc., San Diego, CA, USA). The normality of data was assessed using Kolmogorov–Smirnov tests, and equal variance was assessed using Bartlett’s test. For normally distributed data, statistical differences were determined using analysis of variance (ANOVA), followed by the Bonferroni multiple comparison test. For all tests, p < 0.05 was considered significant (ns not significant, * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001).

Results
VCP is a molecular target of paraptosis
While various natural products and chemicals have been shown to induce paraptosis [ 19 , 20 ], the molecular mechanisms underlying this process remain unclear. To identify potential molecular targets for inducing paraptosis, we utilized the Connectivity Map (CMap, https://clue.io ) [ 21 ], a database that links pharmacological drugs and genomic data. The CMap dataset includes transcriptome information for 17 paraptosis-inducing chemicals, such as withaferin A [ 22 ], pyrrolidine dithiocarbamate (PDTC) [ 23 ], 15-deoxy-Δ12,14-prostaglandin J2 (15d-PGJ2) [ 24 ], and xanthohumol [ 25 ]. We sought genetic perturbations that elicited transcriptional alterations similar to those induced by these paraptosis-inducing chemicals. VCP knockdown emerged as the top-ranked perturbagen, with actions resembling those of the examined paraptosis inducers (Supplementary Fig. 1).
Given VCP’s crucial role in proteostasis [ 4 – 7 ] and the established connection between proteostatic disruption and paraptosis [ 13 , 16 , 19 , 26 – 28 ], our investigation aimed to determine whether VCP knockdown alone could trigger paraptosis. We observed that VCP knockdown, executed using three independent siRNAs, led to cell death accompanied by extensive vacuolation in MDA-MB 435 S cells (Fig. 1a, b ). Similar results were obtained through the adenovirus-mediated expression of a dominant-negative VCP mutant (VCP QQ; VCP E305Q, E578Q ) [ 29 ] fused to an enhanced green fluorescent protein (EGFP) (Fig. 1c, d ) or treatments with various VCP inhibitors, including eeyarestatin-1 (Eer1, an ER membrane-binding domain-containing VCP inhibitor) [ 30 ], CB-5083 (an inhibitor of the D2 ATPase domain of VCP) [ 31 ], and NMS-873 (an allosteric VCP inhibitor) [ 32 ] (Fig. 1e, f ). Next, we explored whether VCP inhibition induces vacuolation originating from the ER and/or mitochondria, a hallmark of paraptosis. To visualize these organelles, we utilized YFP-ER [ 33 ], Sec61β-GFP [ 16 ], and YFP-Mito cells [ 33 ], which exhibit fluorescence in the ER lumen, ER membrane, and mitochondrial matrix, respectively, along with MitoTracker-Red (MTR) staining. We found that VCP siRNAs (Fig. 1g ), mCherry-fused VCP QQ mutant (Fig. 1h ), and three VCP inhibitors (Fig. 1i ) commonly induced significant dilations of the ER and mitochondria. In particular, Eer1 induced the most dramatic dilation among the tested VCP inhibitors. Electron microscopy further revealed megamitochondria (giant mitochondria) and ER-derived vacuoles in Eer1-treated cells (Fig. 1j ). Time-lapse imaging in Eer1-treated YFP-ER and YFP-Mito cells confirmed ER or mitochondria swelling and fusion (Supplementary Fig. 2 ).
Subsequently, we investigated the involvement of apoptosis in the anticancer effect of VCP inhibition. Unlike the release of cytochrome c from mitochondria observed with the apoptosis inducer tumor necrosis factor-related apoptosis-inducing ligand (TRAIL), Eer1 or CB-5083 treatment, VCP knockdown, or VCP QQ-EGFP expression resulted in cytochrome c accumulation within or at the periphery of dilated mitochondria (Fig. 2a). Furthermore, the cleavage of caspase-3 and PARP, which was induced by TRAIL, was not notably observed upon Eer1 or CB-5083 treatment (Fig. 2b). While z-VAD-fmk (a pan-caspase inhibitor) effectively blocked TRAIL-induced cell death and apoptotic morphologies (Fig. 2c, d), it did not affect the vacuolation-associated cell death induced by VCP inhibitors (Fig. 2e, f). Moreover, inhibitors of necroptosis (necrostatin-1; Nec-1) and ferroptosis (ferrostatin-1; Ferro) and an early-phase autophagy inhibitor (3-methyladenine; 3-MA) did not attenuate the cytotoxicity of VCP inhibitors, whereas late-stage autophagy inhibitors (bafilomycin A1; Bafilo and chloroquine; CQ) enhanced it (Fig. 2e, f). In contrast, cycloheximide (CHX), known to block paraptosis [ 10 ], effectively inhibited the mitochondria- and ER-derived vacuolation and cell death induced by the VCP inhibitors (Fig. 2g–i). Together, these results suggest that VCP inhibition predominantly induces paraptosis as a cell death mechanism in cancer cells.
VCP inhibition triggers paraptosis in various breast cancer cell lines and in vivo xenograft mouse models, sparing non-transformed cells
We further examined the impact of VCP inhibition on other breast cancer cell lines. Treatment with Eer1 or CB-5083 induced cell death accompanied by vacuolation in several breast cancer cells, including BT549, MDA-MB 231, Hs578T, MDA-MB468, and T47D cells (Fig. 3a, b ). However, both Eer1 and CB-5083 displayed considerably lower cytotoxicity towards MCF10A cells, a non-tumorigenic breast epithelial cell line, without affecting their morphology (Fig. 3a, b ). Immunocytochemistry of calnexin (CNX), Bap31 (an ER marker protein), and Tim23 (a mitochondrial marker protein) confirmed ER- and mitochondria-derived vacuolation in Eer1- or CB-5083-treated cancer cells (Fig. 3b, c ).
To assess the in vivo effect of VCP inhibition, nude mice xenografted with MDA-MB 435 S cells were orally treated with saline or CB-5083. CB-5083 dose-dependently reduced tumor volume and weight without causing weight loss in mice (Fig. 3d–g ). Hematoxylin and eosin (H&E) staining revealed severe vacuolation in the tumor tissues of CB-5083-treated mice (Fig. 3h ). These results indicate that targeting VCP induces paraptosis both in vitro and in vivo and selectively affects cancer cells over non-transformed cells.
Oncogene-driven Akt activation sensitizes non-transformed cells to VCP inhibition
We explored whether the preferential sensitivity of cancer cells to VCP inhibition is linked to oncogenic activation. To investigate this, we introduced oncogenes such as KRas G12V and HRas G12V into non-transformed cells and examined their response to Eer1 treatment. HRas G12V -expressing cells (HRas G12V /MCF10A) showed significantly greater sensitivity to Eer1-induced cytotoxicity than KRas G12V -expressing cells (KRas G12V /MCF10A) or Mock/MCF10A cells (Fig. 4a ). Interestingly, Eer1 treatment induced cell death accompanied by ER and mitochondrial dilations only in HRas G12V -expressing cells (Fig. 4a–c ). CB-5083 treatment also induced a similar dilation of the ER and mitochondria in HRas G12V -expressing cells but not in Mock/MCF10A cells (Fig. 4c ). Next, we investigated the downstream signaling pathways, including RAF/MEK/ERK and phosphatidylinositol-3-kinase (PI3K)/Akt/mTOR, which are associated with mutant Ras [ 34 ]. ERK activation was observed in cells expressing either HRas G12V or KRas G12V and further enhanced by Eer1 treatment (Fig. 4d ). However, Akt activity was markedly increased only in HRas G12V -expressing cells and further enhanced by Eer1. Inhibition of PI3K/Akt using LY294002 (a PI3K/Akt inhibitor) or MK-2206 (an Akt inhibitor), but not inhibition of MEK (using PD98059 or U0126), blocked Eer1-induced paraptosis in HRas G12V /MCF10A (Fig. 4e, f ). These results suggest the critical role of the PI3K/Akt pathway in sensitizing non-transformed cells to VCP inhibition. Next, we further examined the involvement of mTOR complex 1 (mTORC1) and mTOR complex 2 (mTORC2) in VCP inhibition-induced paraptosis. Inhibitors targeting both mTORC1 and mTORC2, such as PP242 and Torin1 (mTORC1/2 inhibitors), but not the mTORC1-specific inhibitor rapamycin, effectively inhibited Eer1-induced paraptosis in HRas G12V /MCF10A cells (Fig. 4e, f ). Similar results were obtained in HRas G12V /MCF10A undergoing CB-5083-induced paraptosis (Fig. 4f ). 
This was further supported by the reduction in S6 and eIF4E-binding protein 1 (4E-BP1) phosphorylation, indicative of mTORC1 activity [ 35 ], but the increase in Akt phosphorylation at S473, indicative of mTORC2 activity [ 35 ], in Eer1-treated HRas G12V /MCF10A cells (Fig. 4d ). Collectively, these findings suggest differential roles for mTORC1 or mTORC2 in VCP inhibition-mediated paraptosis.
Activation of mTORC2/Akt contributes to VCP inhibition-mediated paraptosis
We investigated whether mTOR signaling plays a similar role in cancer cells undergoing VCP inhibition-induced paraptosis. In MDA-MB 435 cells, Eer1 or CB-5083 treatment led to progressive Akt phosphorylation while reducing 4E-BP1 phosphorylation (Fig. 5a ), indicating mTORC2 activation and mTORC1 inhibition. Knockdown of Rictor (a component of mTORC2) but not that of Raptor (a component of mTORC1) significantly attenuated Eer1- or CB-5083-mediated cytotoxicity and vacuolation (Fig. 5b, c ). Conversely, overexpression of mTOR and constitutively active Akt (myristoylated Akt (Myr-Akt)) potentiated Eer1- or CB-5083-induced cytotoxicity and vacuolation (Fig. 5d, e ). Pretreatment of cells with PP242 or LY294002 effectively inhibited Eer1- or CB-5083-induced paraptosis (Fig. 5f–h ). These results suggest that hyperactive mTORC2/Akt signaling contributes significantly to VCP inhibition-mediated paraptosis.
The ATF4/DDIT4 axis plays a crucial role in Akt activation and subsequent paraptosis upon VCP inhibition
Proteotoxic stress caused by proteostatic disruption triggers the integrated stress response (ISR) [ 36 ], converging on the phosphorylation of eukaryotic initiation factor 2α (eIF2α). This reduces cap-dependent translation while promoting the translation of specific mRNAs, including activating transcription factor 4 (ATF4) [ 37 ]. In our study, VCP inhibitors, VCP knockdown, and VCP QQ mutant expression consistently upregulated poly-ubiquitinated proteins and the components of ISR, including phosphorylated eIF2α (p-eIF2α), ATF4, and C/EBP homologous protein (CHOP), in MDA-MB 435 S cells (Fig. 6a ). Among these, ATF4 was found to be crucially associated with VCP inhibition-mediated paraptosis. ATF4 knockdown effectively inhibited paraptosis induced by Eer1, CB-5083, NMS-873 (Fig. 6b–e ), VCP knockdown (Supplementary Fig. 3a–c ), or VCP QQ mutant expression (Supplementary Fig. 3d–f ). Knockdown of ATF4 but not CHOP effectively inhibited Eer1-induced paraptosis (Fig. 6b–e ). These results underscore the role of ATF4 in VCP inhibition-mediated paraptosis. To further explore the role of ATF4 in this process, we performed the transcriptomic analysis in MDA-MB 435 S cells transfected with siNC (non-targeted siRNA) or siATF4 and in the absence or presence of Eer1 (Supplementary Fig. 4a, b ). We identified genes that were responsive to Eer1 (fold change of siNC+Eer1/siNC-Eer1 > 2) and highly dependent on ATF4 (fold change of siATF4+Eer1/siNC+Eer1 < -2) (Supplementary Fig. 4c ). Among these ATF4 downstream targets, we further investigated the role of DNA-damage-inducible transcript 4 (DDIT4), which is known to be associated with mTORC1 inhibition and mTORC2/Akt activation [ 38 , 39 ]. Our findings revealed that Eer1 upregulated DDIT4, along with ATF4 upregulation and Akt activation (Fig. 6f ). ATF4 knockdown inhibited Eer1-induced DDIT4 upregulation at the mRNA and protein levels, as well as Akt activation (Fig. 6g, h ). 
Additionally, DDIT4 knockdown effectively inhibited Eer1-induced Akt activation (Fig. 6i ). Furthermore, DDIT4 knockdown significantly blocked Eer1-induced cell death and vacuolation (Fig. 6j, k ). These results suggest that the ATF4/DDIT4 axis, particularly DDIT4, may mediate Akt activation in VCP inhibition-mediated paraptosis (Fig. 6l ).
mTORC2/Akt-mediated translational recovery contributes to cancer-selective cytotoxicity of VCP inhibition
Our investigation into the differential vulnerability of cancer and non-transformed cells to VCP inhibition revealed distinct responses in the ISR between the two cell types. While Eer1 induced robust and sustained poly-ubiquitinated protein accumulation and p-eIF2α phosphorylation in MDA-MB 435 S cells, these responses were delayed and weaker in MCF10A cells, lacking ATF4 upregulation (Fig. 7a ). The protein synthesis, as assessed by the SUnSET assay [ 40 ], was strongly suppressed in Eer1-treated MCF10A cells but showed initial reduction followed by recovery in MDA-MB 435 S cells, concurrent with ATF4 upregulation (Fig. 7a ). Inhibition of translation with CHX at 4 h post-Eer1 treatment effectively blocked cell death and vacuolation (Fig. 7b–d ), emphasizing the importance of translational recovery. This translational recovery and ATF4/CHOP upregulation were also observed in HRas G12V /MCF10A cells but not in Mock/MCF10A cells (Fig. 7e ), correlating with their sensitivity to VCP inhibition-induced paraptosis. These results suggest that effective translational suppression may prevent the death of non-transformed cells by alleviating VCP inhibition-mediated proteotoxic stress. However, in cancer cells with hyperactive Akt (possibly driven by oncogenic signals such as HRas G12V ), the translational recovery under VCP inhibition-mediated proteotoxic stress may enhance proteotoxicity by increasing the accumulation of misfolded proteins in the ER and mitochondria, leading to paraptosis. Next, we investigated whether the ATF4/DDIT4 axis and mTORC2/Akt signals are linked to translational dysregulation in VCP inhibition-mediated paraptosis. Either ATF4 or DDIT4 knockdown inhibited Eer1-induced translational recovery without affecting eIF2α phosphorylation (Fig. 7f, g ), suggesting that the ATF4/DDIT4 axis may contribute to VCP inhibition-mediated paraptosis by positively affecting Akt activation and translational recovery. 
In addition, knockdown of Rictor but not Raptor potently inhibited Eer1-induced Akt activation, translational recovery, and ATF4/CHOP upregulation (Fig. 7h ). Similar results were obtained by PP242 or LY294002 pretreatment (Fig. 7i ). These results indicate the importance of the ATF4/DDIT4 axis and mTORC2/Akt signal in VCP inhibition-mediated paraptosis. Interestingly, mTORC2/Akt inhibition suppressed Eer1-induced ATF4 upregulation (Fig. 7h, i ), and the knockdown of ATF4 or DDIT4 inhibited Eer1-induced Akt activation (Fig. 6n, o ). Cross-regulation between the ATF4/DDIT4 axis and mTORC2/Akt signaling upon VCP inhibition suggests their cooperative role in translational recovery and proteotoxic stress enhancement.
eIF3d may critically contribute to translational recovery in VCP inhibition-mediated paraptosis
The mechanism underlying translational recovery in VCP inhibition-mediated paraptosis was further explored. Under stress conditions, mTORC1-dependent cap-dependent mRNA translation is known to be suppressed [ 41 ]. However, alternative mechanisms have been proposed to allow protein synthesis to adapt to various stressors. These mechanisms include eukaryotic translation initiation factor 3 subunit D (eIF3d, a subunit of the eIF3 complex with cap-binding activity) [ 42 – 45 ], as well as the m6A pathway, which involves methyltransferase-like 3 (METTL3) [ 46 ], ATP-binding cassette subfamily F member 1 (ABCF1) [ 47 ], and YTH N6-methyladenosine RNA-binding protein F1 (YTHDF1) [ 48 ]. Remarkably, our findings revealed that eIF3d knockdown had a significant impact on Eer1-induced paraptosis (Fig. 8a–d), effectively inhibiting translational recovery (Fig. 8e). Intriguingly, this effect was not observed with knockdown of eIF4E, METTL3, ABCF1, or YTHDF1 (Fig. 8a–d). Furthermore, eIF3d knockdown resulted in enhanced eIF2α phosphorylation (Fig. 8e). Notably, eIF3d knockdown also suppressed the upregulation of ATF4 at the protein level without downregulating ATF4 mRNA levels (Fig. 8e, f). These findings underscore the pivotal role of eIF3d in facilitating translational recovery during VCP inhibition-induced paraptosis.
In summary, the selective cytotoxicity of VCP inhibition towards cancer cells can be attributed to the disruption of proteotoxic stress mitigation pathways involving the ATF4/DDIT4 axis and hyperactive mTORC2/Akt signaling. This process is further modulated by eIF3d-mediated translational recovery, which enhances proteotoxicity selectively in cancer cells undergoing paraptosis upon VCP inhibition (Fig. 8g ).

Discussion
Identifying cancer-selective targets and understanding their underlying mechanisms are pivotal in developing effective cancer therapies. Among these potential targets, VCP has emerged as both a prognostic biomarker and a prospective therapeutic target in cancer [ 8 , 9 ]. Our study introduces a novel perspective by highlighting VCP’s central role as a molecular target in paraptosis, a distinctive form of programmed cell death. Importantly, our findings demonstrate that inhibiting VCP leads to preferential cell death in breast cancer cells compared to non-transformed cells, primarily through the induction of paraptosis. Genetic and pharmacological inhibition of VCP both elicit the morphological features of paraptosis, reduced cell viability, and the integrated stress response (ISR), demonstrating the crucial role of ATF4 in paraptosis. These results suggest that gene-level inhibition of VCP regulates paraptosis through the same mechanism as pharmacological VCP inhibitors.
The impairment of VCP-mediated ERAD and MAD processes appears central to this phenomenon. Our experiments revealed that Eer1, a VCP inhibitor, led to increased protein levels of the ERAD substrates (e.g., nuclear respiratory factor 1 (Nrf1) [ 49 ] and receptor accessory protein 5 (REEP5) [ 50 ]) and the MAD substrates (e.g., myeloid cell leukemia 1 (Mcl-1) [ 51 ] and mitofusin 2 (Mfn2) [ 52 ]) (see Supplementary Fig. 5 ). Inhibition of VCP may result in the progressive accumulation of misfolded proteins within the ER and mitochondria, leading to osmotic pressure changes and subsequent organelle swelling [ 53 ]. The fusion of the ER compartments induced by VCP inhibition may disrupt protein synthesis, folding, and transport, further exacerbating proteotoxic stress. Additionally, mitochondrial swelling and fusion at the early phase may act as an adaptive response to maintain mitochondrial membrane integrity [ 17 ]. However, excessive megamitochondrial expansion can compromise membrane potential, deplete cellular energy, and ultimately drive paraptotic cell death [ 17 , 19 ]. Targeting these two organelles during paraptosis represents a unique and promising therapeutic strategy against solid tumors [ 19 ].
Proteasomal and VCP inhibitors both induce proteostatic stress [ 8 , 54 ]. While proteasome inhibitors (PIs) have shown clinical utility in hematological malignancies by inducing apoptosis [ 55 ], their effectiveness against solid tumors has been limited [ 54 , 56 ]. In contrast, VCP inhibitors have demonstrated potent anti-tumor activities across various hematologic and solid tumor models [ 57 ]. Without the assistance of VCP, the proteasome may not efficiently process ubiquitinated substrates, including those associated with ERAD, MAD, and chromatin-associated degradation [ 58 – 61 ]. Therefore, the preferential targeting of solid tumors by VCP inhibitors, compared to PIs, may be attributed to their causing broader defects in the ubiquitin-proteasome system (UPS) than PIs do [ 57 ]. Additionally, compared to PIs, VCP inhibitors impact multiple cellular processes, including autophagy [ 62 ], endosomal trafficking [ 62 , 63 ], DNA repair and genome stability [ 64 ], membrane fusion [ 65 ], non-proteolytic disassembly of the protein phosphatase-1 complex [ 66 , 67 ], and regulation of PD-L1 expression [ 68 ], possibly contributing to their efficacy in solid tumor models [ 31 ]. Among the developed VCP inhibitors, the ATPase-competitive inhibitors CB-5083 and CB-5339 have reached clinical trials ( https://clinicaltrials.gov trial numbers NCT02243917 and NCT04372641) by demonstrating effective anti-tumor activity across various tumor models [ 31 , 69 , 70 ]. Resistance to VCP inhibitors, primarily attributed to specific mutations in the D2 ring ATPase domain and the linker region connecting the D1 and the D2 domains of VCP, presents a clinical challenge [ 57 ]. Understanding these resistance mechanisms is crucial for developing more effective inhibitors or combination therapies.
Recent findings from our laboratory have revealed distinct responses to PI treatment in different cell types [ 27 ]. Multiple myeloma (MM) cells were highly susceptible to bortezomib (Bz), inducing apoptosis, while breast cancer cells exhibited resistance. Interestingly, the application of ISRIB, a small molecule known to restore eIF2B-mediated translation during the integrated stress response, protected MM cells from apoptosis while enhancing Bz-mediated cytotoxicity in breast cancer cells by inducing paraptosis. These results suggest that enhancing translation and inducing paraptosis may effectively overcome PI resistance in solid tumor cells.
The present study further underscores that the difference in proteotoxic stress responses between cancer and normal cells could be exploited for therapeutic purposes. Sustained translation attenuation under VCP inhibition can alleviate proteotoxic stress and support the survival of non-transformed cells. However, translation recovery following initial suppression in cancer cells enhances proteotoxic stress, ultimately leading to paraptotic cell death.
The PI3K/Akt/mTOR signaling cascade is hyperactivated in many solid tumors, including breast cancer, contributing to cancer progression and resistance to pro-apoptotic therapies [ 71 , 72 ]. However, targeting this pathway has shown limited efficacy due to feedback regulation and interference with other signaling pathways [ 73 ]. Our study reveals that VCP inhibition leads to selective mTORC2 activation and mTORC1 inhibition in cancer cells. In contrast, non-transformed cells exhibit mTORC1 activation without mTORC2 induction upon VCP inhibition. Additionally, our findings demonstrated that mTORC2 activation is essential for the selective action of VCP inhibitors in cancer cells. Inhibition of mTORC2/Akt signals effectively attenuates translational recovery and paraptosis induced by VCP inhibition. Cancer cells with hyperactive mTORC2/Akt signaling are more vulnerable to VCP inhibition, making VCP an attractive target in this context.
The ATF4/DDIT4/mTORC2/Akt signals are known to be required for cell survival under energy-related stresses, such as amino acid deprivation [ 39 ]. In our study, the ATF4/DDIT4 axis contributed to Akt activation, translational recovery, and paraptosis upon VCP inhibition. Additionally, eIF3d critically contributed to translational recovery, leading to ATF4 upregulation and enhancing cancer cells’ sensitivity. Therefore, we speculate that in response to proteotoxic stress, such as VCP inhibition, the ATF4/DDIT4/mTORC2/Akt signals and eIF3d may shift the cell fate towards paraptotic cell death.
In conclusion, our study unveils the potential of VCP as a therapeutic target in cancer, emphasizing the selective vulnerability of cancer cells to VCP inhibition-induced paraptosis. This strategy holds promise for overcoming resistance to pro-apoptotic therapies in solid tumors driven by oncogenic PI3K/Akt/mTOR signaling.

Abstract

Valosin-containing protein (VCP)/p97, an AAA+ ATPase critical for maintaining proteostasis, emerges as a promising target for cancer therapy. This study reveals that targeting VCP selectively eliminates breast cancer cells while sparing non-transformed cells by inducing paraptosis, a non-apoptotic cell death mechanism characterized by endoplasmic reticulum and mitochondria dilation. Intriguingly, oncogenic HRas sensitizes non-transformed cells to VCP inhibition-mediated paraptosis. The susceptibility of cancer cells to VCP inhibition is attributed to the non-attenuation and recovery of protein synthesis under proteotoxic stress. Mechanistically, mTORC2/Akt activation and eIF3d-dependent translation contribute to translational rebound and amplification of proteotoxic stress. Furthermore, the ATF4/DDIT4 axis augments VCP inhibition-mediated paraptosis by activating Akt. Given that hyperactive Akt counteracts chemotherapeutic-induced apoptosis, VCP inhibition presents a promising therapeutic avenue to exploit Akt-associated vulnerabilities in cancer cells by triggering paraptosis while safeguarding normal cells.
Supplementary information

The online version contains supplementary material available at 10.1038/s41419-024-06434-x.
Author contributions
DML, IYK, HJL, MJS, HIL, MYC, JHJ, YHC, SSP, and MY performed experiments and analyzed the data. GY, SYJ, EKC, and COY provided technical and material support. DML, EK, and KSC designed the study and wrote the manuscript. All the authors read and approved the manuscript.
Funding
This work was supported by National Research Foundation of Korea (NRF) grants funded by the Korean government (MSIP), Mid-career Research Program (NRF-2023R1A2C2006580 and 2020R1A6A1A03043539).
Data availability
All data and information concerning this study will be provided upon request.
Competing interests
The authors declare no competing interests.
Ethics Declaration
The institutional animal care and use committee of the Asan Institute for Life Science approved animal protocols.

Citation: Cell Death Dis. 2024 Jan 13; 15(1):48
PMC10787778 (PMID: 38218979)

Introduction
Despite an increased understanding of the physiological processes involved in tumor metastasis, few therapies have proven clinical efficacy in advanced metastatic cancers such as glioblastoma, ovarian, prostate, pancreatic, and triple-negative breast cancer. The critical role of the tumor microenvironment (TME) as both a stimulator and a suppressor of tumor progression and metastasis is now widely recognized 1 , 2 . A potential TME-targeted therapy has been proposed based on the observation that metastasis-incompetent tumors generate metastasis-suppressive microenvironments in distant organs by inducing thrombospondin-1 (TSP-1) expression in bone marrow-derived Gr1+ myeloid cells 3 , 4 .
A potent inhibitor of tumor metastasis, prosaposin (Psap), acts via stimulation of p53 and the anti-tumorigenic TSP-1 in bone marrow-derived cells that are recruited to metastatic sites 5 , 6 . Within the TME, TSP-1 has been shown to act on two key receptors, CD36 and CD47 4 , 5 . As a mediator of the pro-apoptotic activity of TSP-1, CD36 has been shown to be expressed on greater than 97% of human serous ovarian tumors tested 7 . CD36 expression was also found to be higher in metastatic tumors than in primary tumors, at levels 2–3-fold above those in ovarian and fallopian tube tissue 7 . CD36 has also been shown to be expressed in multiple human cancer cell lines, including those derived from pancreatic, ovarian, breast, and prostate cancer 7 , 8 . Another TSP-1 receptor, CD47, is expressed in various types of cancer and has been shown to inhibit the direct killing of cancer cells 9 by binding to SIRPα on the cell surface of macrophages, which represents a “do-not-eat-me” signal that prevents phagocytosis by the macrophage 10 . CD47 also stimulates tumor-initiating cells, sometimes called cancer stem cells, to differentiate into mature cells 9 . High levels of either CD36 or CD47 are prognostic indicators of poor outcomes for cancer patients 11 , 12 . Taken together, these findings suggest that a drug that stimulates expression of TSP-1 in the TME may have multiple beneficial effects as an anti-cancer agent.
To identify potential anti-cancer agents, a proprietary TME screening platform was utilized to evaluate metastatic vs localized tumors and refractory vs responsive tumors. Based on these findings, VT1021 was developed with drug-like properties derived from the active sequence in Psap 4 . VT1021 exhibited TSP-1-inducing activity and significantly regressed tumors in a patient-derived xenograft (PDX) model of metastatic ovarian cancer 7 . The in vivo activity of VT1021 in murine xenograft models of several human solid tumor indications is presented in this report. This first-in-human phase 1 study was designed to determine the recommended phase 2 dose (RP2D), investigate the safety, pharmacokinetics (PK), and efficacy, and confirm the mechanism of action of VT1021, a novel, first-in-class dual inhibitor of CD36 and CD47, in patients with advanced solid tumors. Here, we select the RP2D for VT1021, demonstrate that it is safe and well tolerated at all dose levels, achieving a disease control rate (DCR) of 42.9%, with clear validation of the proposed mechanism of action via the stimulation of TSP-1 in the TME.
Clinical study design
NCT03364400 was a phase 1, first-in-human, multicenter, open-label, dose escalation, and expansion study of VT1021 designed and sponsored by Vigeo Therapeutics, Inc. Data for the dose escalation phase are reported here. The data cut-off date was December 8, 2021. For the dose escalation portion of the study, the first patient was enrolled on November 28, 2017, and the last patient was enrolled on January 27, 2020.
The primary objective of the escalation phase was to determine the RP2D for VT1021. The secondary objectives were to characterize the adverse event (AE) profile, determine the PK, and describe preliminary evidence of efficacy, if feasible, by using objective response rate (ORR), disease control rate (DCR), and progression-free survival (PFS) based on Response Evaluation Criteria in Solid Tumors (RECIST) v1.1. Exploratory objectives included pharmacodynamic (PD) assessment of expression levels of CD36, CD47, TSP-1 and selected immune cells by immunohistochemistry (IHC) on pairs of pre- and on-study biopsies.
Eligible patients had advanced solid tumors that were refractory to, or intolerant of, existing therapies known to provide clinical benefit for their condition. Patients were aged ≥18 years and had Eastern Cooperative Oncology Group (ECOG) performance status of ≤2. Patients had evaluable or measurable disease by RECIST v1.1. Patients had to have adequate marrow reserve, liver and renal function. Key exclusion criteria included diagnosis of another malignancy within the past 2 years, history of a major surgical procedure or a significant traumatic injury within 14 days prior to commencing study drug, treatment with investigational therapy(ies) within 5 half-lives of the investigational therapy prior to the first scheduled day of dosing with VT1021, evidence of symptomatic brain metastases and use of other concurrent chemotherapy, immunotherapy, radiotherapy, or investigational anti-cancer therapy. Full eligibility criteria are available in the Protocol (Supplementary Information).
Dose escalation followed a variation on the traditional 3 + 3 study design. The dose escalation consisted of the administration of VT1021 intravenously twice weekly at doses of 0.5, 1.0, 2.0, 3.3, 5.1, 6.6, 8.8, 11.8 or 15.6 mg/kg (Fig. 1 ). The starting dose was 1 mg/kg, established based on pre-clinical toxicity studies. Safety was evaluated using CTCAE version 5.0. Each dose level would enroll at least one patient. If no dose-limiting toxicity (DLT) was observed, the next patient would be enrolled at the next higher dose level. If one DLT was observed, a minimum of 3 patients had to be treated at the same dose level. Dose escalation was to continue until at least 2 patients in a cohort of 6 experienced a DLT. Patients received VT1021 by intravenous infusion twice weekly on a 28-day cycle. The extent of disease was evaluated by imaging studies at the end of Cycle 2 and after every 2 cycles thereafter. Treatment continued until disease progression, unacceptable toxicity, or another withdrawal criterion was met. Intra-patient dose escalation was permitted upon meeting pre-specified criteria. The RP2D was defined as the dose level at which ≤33% of patients experienced a DLT. DLTs (defined in the study protocol, Supplementary Information) were assessed during the first 28 days of treatment.
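The escalation logic described above can be sketched as a small simulation. This is a simplified illustration of the stated rules (single-patient cohorts, expansion to at least 3 patients after a first DLT, stopping once ≥2 DLTs occur at a level), not the trial's actual conduct or software; the `dlt_observed` oracle is a hypothetical stand-in for observed toxicity.

```python
# Simplified sketch of the accelerated 3 + 3 variant described above.
# Illustrative only: real conduct allowed cohorts of up to 6 and clinical review.

DOSES = [0.5, 1.0, 2.0, 3.3, 5.1, 6.6, 8.8, 11.8, 15.6]  # mg/kg, twice weekly

def escalate(dlt_observed):
    """Return the highest dose cleared under the simplified rules.

    dlt_observed(dose, n) -> number of DLTs among n new patients at dose.
    """
    highest_cleared = None
    for dose in DOSES:
        dlts = dlt_observed(dose, 1)       # each level starts with one patient
        if dlts > 0:
            dlts += dlt_observed(dose, 2)  # one DLT: expand to >= 3 patients
            if dlts >= 2:                  # >= 2 DLTs at a level: stop escalating
                break
        highest_cleared = dose             # <= 33% DLT rate: level is cleared
    return highest_cleared

# In the trial no patient experienced a DLT, so all nine levels were cleared:
top = escalate(lambda dose, n: 0)  # -> 15.6
```

With a hypothetical oracle producing DLTs in every patient at 5.1 mg/kg and above, escalation would stop there and 3.3 mg/kg would be the highest cleared level.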
To decrease the risk of infusion reactions during the first week of dosing, a premedication regimen was implemented. Prior to receiving each infusion of VT1021, patients were required to receive premedication with either dexamethasone by mouth 6–12 h pre-infusion or methylprednisolone 0.5 to 2 h prior to the start of infusion, plus an antihistamine (H1 antagonist), acetaminophen, and H2 blockers at the discretion of the investigator. In lieu of this premedication regimen, clinical investigators were allowed to administer premedication regimens per institutional guidelines. The premedication corticosteroid dose was to be decreased, tapered, or eliminated at the Investigator’s discretion after the first week of dosing.
Patient assessment and follow-up procedures can be found in the schedule of assessments in the study protocol Appendix 1 (Supplementary Information). Per protocol, regular safety assessments were performed in a population of patients who have received at least one dose of VT1021, including but not limited to physical examinations, ECOG/Karnofsky PS, electrocardiograms, and laboratory parameters. Clinical response was evaluated by using RECIST v1.1 and iRECIST in a population of patients who have at least completed one cycle of VT1021 treatment, per protocol. Blood samples were collected for PK analysis on Days 1, 4, 8, 11, 15, 18, 22, 25 and 50 at pre-dose, and at 0, 2-, 4-, 6- and 24-h post-infusion (Fig. 2 ). Plasma concentrations were determined with a validated assay using liquid chromatography- mass spectrometry.
For patients who signed a consent form, a pre-study biopsy or archival tumor specimen obtained within 6 months prior to study initiation was collected. In addition, on-study biopsies were collected at the end of Cycle 1 Week 4 or at any time during Cycle 2. Biopsies could be obtained after Cycle 2 at the discretion of the Investigator. Paired pre-study and on-study biopsies were analyzed for expression of CD36, CD47, TSP-1, and immune cell populations by both IHC (Figs. 3 – 5 ) and multiplexed ion beam imaging (MIBI). MIBI is performed by staining formalin-fixed paraffin-embedded (FFPE) tissue with a panel of metal-labeled antibodies and then imaging the tissue using time-of-flight secondary ion mass spectrometry (ToF-SIMS) 13 . The masses of detected species are then assigned to target biomolecules given the unique metal isotope label of each antibody, creating multiplexed images. All antibodies in the panel have been MIBI validated on human FFPE tissue.
All relevant ethical regulations were followed during the study. The methods were performed in accordance with relevant guidelines and regulations and approved by the Food and Drug Administration (FDA). Written informed consent was obtained from all patients who participated in the study. The Institutional Review Boards (IRBs) of all participating institutions approved the study protocol. The institutions that participated in the study were Northwestern University Medical School, Chicago, IL, Horizon Oncology Center, Lafayette, IN, South Texas Accelerated Research Therapeutics, San Antonio, TX, and Beth Israel Deaconess Hospital, Boston, MA.
We used the CONSORT checklist when writing our report 14 .
Inclusion criteria
To qualify for enrollment, all the following criteria must be met: (1) Patient must provide written informed consent. (2) Patient is ≥18 years of age. (3) For the Dose Escalation Phase: Patients with advanced solid tumors that are refractory to, or intolerant of, existing therapies known to provide clinical benefit for their condition. (4) Patient has evaluable or measurable disease by RECIST v1.1. (5) Patient has a performance status (PS) of 0–1 on the Eastern Cooperative Oncology Group (ECOG) scale. (6) Patient is at least 21 days removed from therapeutic radiation or chemotherapy prior to the first scheduled day of dosing with VT1021 and has recovered to Grade ≤1 (National Cancer Institute [NCI] Common Terminology Criteria for Adverse Events [CTCAE] v5.0) from all clinically significant toxicities related to prior therapies. (a) For patients receiving nitrosoureas or mitomycin C, the window is 6 weeks. (b) For patients receiving monoclonal antibody therapy, the window is at least one half-life or 4 weeks (whichever is shorter). (7) Patient has adequate organ function defined as: (a) Absolute neutrophil count (ANC) ≥1.5 × 10⁹/L (1500/μL) and absolute lymphocyte count (ALC) ≥0.7 × 10⁹/L (700/μL). (b) Platelets ≥100 × 10⁹/L. (c) Hemoglobin ≥9 g/dL. (d) Activated partial thromboplastin time/prothrombin time/international normalized ratio (aPTT/PT/INR) ≤1.5 × upper limit of normal (ULN) unless the patient is on anticoagulants, in which case therapeutically acceptable values (as determined by the investigator) meet eligibility requirements. (e) Aspartate aminotransferase (AST) or alanine aminotransferase (ALT) ≤2.5 × ULN. In the case of known (i.e., radiological or biopsy documented) liver metastasis, serum transaminase levels must be ≤5 × ULN. (f) Total serum bilirubin ≤1.5 × ULN (except for patients with known Gilbert’s Syndrome, for whom ≤3 × ULN is permitted). (g) Renal: Serum creatinine <2.0 × ULN and creatinine clearance ≥50 mL/min/1.73 m².
(h) Serum albumin >3 g/dL. (8) Patient agrees to use acceptable methods of contraception during the study and for at least 90 days after the last dose of VT1021 if sexually active and able to bear or beget children.
Exclusion criteria
The presence of any of the following will exclude the patient from the study: (1) Diagnosis of another malignancy within the past 2 years (excluding a history of carcinoma in situ of the cervix, superficial non-melanoma skin cancer, or superficial bladder cancer that has been adequately treated, or stage 1 prostate cancer that does not require treatment or requires only treatment with luteinizing hormone-releasing hormone agonists or antagonists if initiated at least 90 days prior to the first dose of VT1021). (2) History of a major surgical procedure or a significant traumatic injury within 14 days prior to commencing study drug, or the anticipation of the need for a major surgical procedure during the course of the study. (3) Treatment with investigational therapy(ies) within 5 half-lives of the investigational therapy prior to the first scheduled day of dosing with VT1021, or 4 weeks if the half-life of the investigational agent is not known, whichever is shorter. (4) Concurrent serious (as determined by the Principal Investigator [PI]) medical conditions, including, but not limited to, New York Heart Association (NYHA) class III or IV congestive heart failure, history of congenital prolonged QT syndrome, uncontrolled infection, active hepatitis B, hepatitis C or human immunodeficiency virus (HIV), or other significant co-morbid conditions that, in the opinion of the Investigator, would impair study participation or cooperation. (5) Pregnant or planning to become pregnant or breast feed while on study. (6) Evidence of symptomatic brain metastases. Patients with treated (surgically excised or irradiated) and stable brain metastases are eligible, assuming the patient has adequately recovered from treatment, the treatment was at least 28 days prior to initiation of study drug, and baseline brain computed tomography (CT) with contrast or magnetic resonance imaging (MRI) within 14 days of initiation of study drug, is negative for new or worsening brain metastases. 
(7) Other concurrent chemotherapy, immunotherapy, radiotherapy, or investigational anti-cancer therapy. (8) Requirement for palliative radiotherapy to lesions that are defined as target lesions by RECIST/RANO criteria at the time of study entry. (9) Known hypersensitivity to any of the components of VT1021 (sodium phosphate dibasic anhydrous, sodium phosphate monobasic monohydrate, mannitol, polysorbate 80) or a severe reaction to PS20- or PS80-containing drugs or investigational agents (e.g., amiodarone, Vitamin K, etoposide, docetaxel, cancer vaccines, protein biotherapeutics [such as monoclonal antibodies], erythropoietin-stimulating agents, fosaprepitant). (10) Chronic, systemically administered glucocorticoids in doses equivalent to >5 mg prednisone daily. Topical, inhalational, ophthalmic, intraarticular, and intranasal glucocorticoids are permitted. Isolated or intermittent use of systemically administered glucocorticoids to treat complications of malignancy, use as a premedication, or as a one-time prep for an imaging procedure is permitted. If the patient was on >5 mg prednisone/day equivalent, the last dose must have been at least 7 days prior to the first planned dose of study drug. (11) Patients with active hepatitis B (e.g., hepatitis B surface antigen [HBsAg] reactive) are excluded; however, patients with past hepatitis B virus (HBV) infection or resolved HBV infection (defined as the presence of hepatitis B core antibody [HBcAb] and absence of HBsAg) may be enrolled provided that prior testing/known status for HBV deoxyribonucleic acid (DNA) is negative. Patients with active hepatitis C (e.g., hepatitis C virus [HCV] ribonucleic acid [RNA] [qualitative] detected) are excluded; however, patients with cured hepatitis C (negative HCV RNA prior test/known status) may be enrolled.
Statistical analysis
The disease control rate (DCR) used for clinical outcomes was calculated as the percentage of patients with advanced cancer whose therapeutic intervention led to a complete response, partial response, or stable disease. The 90% confidence interval (CI) was calculated using the exact (Clopper-Pearson) interval.
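The exact (Clopper-Pearson) 90% CI can be reproduced with a short stdlib-only sketch. The inputs below (12 patients with disease control out of 28 evaluable, i.e., the reported 42.9% DCR) come from the Results; the bisection helper itself is an illustrative implementation, not the study's analysis code.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.10):
    """Exact two-sided (1 - alpha) CI for a binomial proportion x/n."""
    def solve(cond):
        # Bisection: cond(p) is True below the boundary, False above it.
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if cond(mid) else (lo, mid)
        return (lo + hi) / 2
    # Lower bound solves P(X >= x | p) = alpha/2; upper solves P(X <= x | p) = alpha/2.
    lower = 0.0 if x == 0 else solve(lambda p: 1 - binom_cdf(x - 1, n, p) < alpha / 2)
    upper = 1.0 if x == n else solve(lambda p: binom_cdf(x, n, p) >= alpha / 2)
    return lower, upper

lo90, hi90 = clopper_pearson(12, 28, alpha=0.10)  # DCR = 12/28 = 42.9%
```

The same interval is available from standard statistical packages (e.g., the exact method for binomial proportion CIs); the stdlib version is shown only to make the definition concrete.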
Statistical analysis was performed with GraphPad Prism 9.3.1. P values were calculated by unpaired two-sample t-test; points with error bars indicate mean values and the standard error of the mean (SEM).
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. | Results
The RP2D for VT1021 was determined to be 11.8 mg/kg, dosed twice weekly. This determination was based on the combined assessment of safety, tolerability, and PK exposure across all dose levels. The individual assessments of these parameters are described in the following sections.
Clinical patient population, treatment, and disposition
Thirty-eight patients, who received at least 1 dose of VT1021, were enrolled (Fig. 1 ). Patient demographics and disease characteristics by dose cohort are shown in Table 1 . The median age was 65 years (range 40–84) and 100% of patients had an ECOG performance status of ≤2. A variety of tumor types was enrolled, the most common being ovarian cancer (8 patients, 21%) and pancreatic cancer (7 patients, 19%); all other tumor types had three or fewer patients (Table 1 ).
The patient population was heavily pre-treated: all patients had received prior antineoplastic therapy, with 78.9% receiving ≥3 prior treatment regimens for advanced/metastatic disease and 42% receiving prior radiotherapy.
Intra-patient dose escalation was permitted and a total of 4 patients were dose-escalated: one patient with colorectal cancer was dose-escalated from 0.5 to 1.0 mg/kg at cycle 8, one patient with uterine cancer was dose-escalated from 5.1 to 6.6 mg/kg at cycle 7, and two patients, one with thymoma and the other with pseudomyxoma peritonei, were dose-escalated from 8.8 to 11.8 mg/kg at cycles 7 and 8, respectively. One patient with ovarian cancer was dose de-escalated from 6.6 to 5.1 mg/kg at cycle 2 due to concern for worsening of baseline peripheral neuropathy from grade 1 to 2, which was later attributed to previous platinum chemotherapy.
To achieve the minimum of 3 evaluable patients, dose levels 1.0 mg/kg, 5.1 mg/kg and 15.6 mg/kg were over-enrolled. Some patients withdrew voluntarily, others withdrew to enter hospice care, and at least 1 patient withdrew due to an infusion reaction during the first dose.
The first patient dosed at the protocol-defined starting dose of 1 mg/kg experienced a grade 3 infusion reaction on the third dose. To ensure safety, 3 patients were treated at the lower dose level of 0.5 mg/kg and the protocol was amended to require premedication of steroids and/or antihistamines (at the PI’s discretion) prior to the first infusion. No infusion reaction was reported in 3 patients treated at 0.5 mg/kg and dose escalation resumed. Every patient received premedication per protocol and the majority of the patients were tapered off the premedication after 1–2 weeks of VT1021 treatment.
Of the patients who discontinued from the study, reasons for discontinuation included disease progression (65.8%), patient or physician decision (13.2%), AEs (10.5%), and death (7.9%). None of the deaths were attributed to VT1021 treatment. Four patients died on treatment; the causes of death were hepatic failure due to disease progression (5.1 mg/kg dose level), hepatic failure due to disease progression (6.6 mg/kg dose level), tumor hemorrhage (15.6 mg/kg dose level), and septic shock unrelated to protocol (15.6 mg/kg dose level). Three patients died after treatment discontinuation, within 30 days following the last dose of VT1021; the causes of death were disease progression (3.3 mg/kg dose level), multi-organ failure (11.8 mg/kg dose level), and disease progression (15.6 mg/kg dose level).
Safety and tolerability
Overall, VT1021 had a favorable safety profile. The incidence of grade ≥3 AEs suspected to be related to the study drug was very low (7.9%). Thirty-seven patients (97.4%) experienced at least one treatment-emergent adverse event (TEAE) and 17 patients (44.7%) experienced grade ≥3 TEAEs, as shown in Table 2 (TEAEs in ≥5% of patients). A TEAE is defined as any event that occurs on or after the first dose of study drug administration or any pre-existing event which worsened in severity after dosing. There were 5 patients (13.2%) with fatal TEAEs, none of which were classified as drug-related. AEs suspected to be related to the study treatment (RTEAEs) were experienced by 18 patients (47.4%), as shown in Table 3 (RTEAEs in ≥5% of patients); the most frequent RTEAEs (≥10% of patients) were fatigue (6 patients, 15.8%), nausea (4 patients, 10.5%) and infusion-related reaction (4 patients, 10.5%). Grade 3 RTEAEs were reported in 3 patients (7.9%), with a single occurrence each of infusion-related reaction, anemia, and increased aspartate aminotransferase (AST), blood bilirubin and creatinine. Study drug was held for grade 3 elevation in AST and blood bilirubin and the patient was discontinued from treatment for clinical progression of disease. Study drug was discontinued for the patient with grade 3 infusion reaction and not re-started for the patient with anemia and increased blood creatinine, who subsequently experienced a serious adverse event (SAE) of sepsis.
DLTs, PK and RP2D
Throughout the course of the dose escalation trial, no patient experienced a DLT and thus the maximum tolerated dose (MTD) was not reached. Because no MTD was reached, the recommended phase 2 dose (RP2D) was determined based on the pharmacokinetic (PK) profile. Table 4 shows the PK parameters for VT1021 by dose cohort and Fig. 2 shows the median concentration-time profiles by dose. VT1021 plasma exposures increased dose proportionally from 0.5 to 8.8 mg/kg based on mean C max , AUC, and CL values, and the exposures from 8.8 mg/kg to 15.6 mg/kg were similar. VT1021 did not appear to accumulate in plasma with repeated dosing, which is consistent with the dosing frequency and the short terminal half-life observed (average 1.2 to 1.3 h across all doses and sampling days).
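As an illustration of how a terminal half-life like the reported 1.2–1.3 h average is derived, the sketch below fits the log-linear terminal phase of a concentration-time profile. The sample times and concentrations are hypothetical, not trial data.

```python
from math import exp, log

def terminal_half_life(times_h, concs):
    """Terminal half-life from a least-squares fit of ln(C) vs. time."""
    logs = [log(c) for c in concs]
    n = len(times_h)
    t_bar = sum(times_h) / n
    y_bar = sum(logs) / n
    slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(times_h, logs))
             / sum((t - t_bar) ** 2 for t in times_h))
    lam_z = -slope                  # terminal elimination rate constant (1/h)
    return log(2) / lam_z

# Hypothetical terminal-phase samples decaying with a true t1/2 of 1.25 h:
times = [2.0, 4.0, 6.0]                                      # h post-infusion
concs = [100.0 * exp(-(log(2) / 1.25) * t) for t in times]   # made-up ng/mL values
t_half = terminal_half_life(times, concs)                    # ~1.25 h
```

In practice, noncompartmental PK software selects the terminal points and reports λz, half-life, Cmax, and AUC together; this sketch only shows the half-life step.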
Based on these data, the RP2D of 11.8 mg/kg twice weekly was selected: PK exposure levels were similar from 8.8 to 15.6 mg/kg, and no increase in dose-related AEs or toxicities was observed.
Efficacy
While efficacy was not a primary readout for the dose escalation trial, VT1021 did demonstrate single-agent activity in multiple patients. Out of 38 patients who received at least one dose of VT1021 in the escalation phase, 28 patients were considered evaluable based on the criteria of completing at least one cycle of treatment with tumor imaging during cycle 2. One patient with metastatic thymoma (Stage 4) achieved confirmed partial response (PR) and remained on treatment for 504 days. Eleven patients had stable disease (SD) in 9 different solid tumor indications, resulting in a disease control rate (DCR) of 42.9% (Table 5 ).
To better understand the biological activity of VT1021, pre-study biopsies from 25 evaluable patients were analyzed by immunohistochemistry (IHC). Specifically, the expression levels of CD36 and CD47, the two major cell surface receptors for TSP-1, were assayed. The intensity of each marker was analyzed by ImageJ/Fiji. Biopsies were scored as being either low, medium or high (representative images are shown in Fig. 3 ). The scores were measured by both the percentage of cells with positive staining of the biomarkers, and by the level of intensity of the staining signal. Moreover, patients were further classified as being dual high for both CD36 and CD47, not dual high, or unknown. The percent change in target lesion from baseline (based on the length of the long axis), correlation to dual high CD36 and CD47 status and duration of exposure to VT1021 for the evaluable patients are shown in Fig. 4 . Nine of 25 patients with available biopsies were scored as dual high for CD36 and CD47 (36%) (Fig. 4a ). Overall, for all evaluable patients the median duration on treatment was 53 days. Out of the 9 patients with dual high CD36 and CD47 expression, 8 achieved SD (89%) with a mean treatment duration of 148 days (Fig. 4b ).
Biomarker analyses
Since the mechanism of action (MOA) of VT1021 is mediated by the induction of TSP-1 in MDSCs 4 , 7 , we sought to analyze the expression of TSP-1 in pre- and on-study biopsies. The rationale was that induction of TSP-1 expression would functionally reprogram the TME in patient tumors. Although not required by the study protocol, paired pre- and on-study tumor tissue samples were voluntarily obtained from 7 patients and were analyzed by IHC for expression of CD36, CD47 and TSP-1. The seven patients who provided paired biopsy samples were the following: pancreatic cancer at 5.1 mg/kg, prostate cancer at 5.1 mg/kg, uterine carcinosarcoma at 5.1 mg/kg, kidney cancer at 8.8 mg/kg, appendiceal carcinoma at 11.8 mg/kg, and two ovarian cancers at 15.6 mg/kg. VT1021 induced expression of TSP-1 in the TME in all on-study biopsies analyzed, with one representative image shown in Fig. 5 . Significantly, analysis of CD36 and CD47 expression revealed no change in pre- vs on-study biopsies (Fig. 5 ).
Additionally, we assessed the composition of the TME to determine whether VT1021 was able to reprogram the recruited immune and inflammatory cells. Tumor tissue samples from 4 patients were analyzed for quantitative and qualitative changes in MDSCs, T cells and macrophages after VT1021 treatment. The four patients whose paired biopsy samples were used for quantitative biomarker analysis were the following: uterine carcinosarcoma at 5.1 mg/kg, kidney cancer at 8.8 mg/kg, and two ovarian cancers at 15.6 mg/kg. Analysis of the 4 pairs of biopsies revealed that TSP-1 expression was induced in MDSCs in on-study biopsies compared to pre-study (Fig. 5 ). Moreover, 3 out of 4 on-study biopsies displayed increased CD8+ CTLs and iNOS+ M1 macrophages with concomitant decreases in FoxP3+ regulatory T cells and CD163 + M2 macrophages. Figure 5 depicts a representative example of changes observed in the TME in a patient with metastatic renal cell carcinoma (RCC) who achieved SD in the 8.8 mg/kg cohort and was on treatment for 105 days. Similar TSP-1 induction and macrophage repolarization results were also found in the MIBI study (Supplementary Information).

Discussion
We report here the first-in-human experience of VT1021 in patients with advanced solid tumors who were refractory to multiple lines and various classes of systemic therapies. The study rigorously assessed the safety, PK/PD, and clinical activities of this first-in-class agent, which targets the cell surface molecules CD36 and CD47 simultaneously via induction of TSP-1 15 . Treatment with VT1021 in this population was safe and well tolerated. The major drug-related toxicity was infusion reaction, noted at the protocol-defined starting dose, which resolved following pre-medication with steroids and/or antihistamines. The incidence of ≥grade 3 AEs suspected to be related to the study drug was very low (7.9%). Several patients died on study; however, none of the deaths were attributed to the study drug, as determined by the clinical principal investigators. All patients in the escalation phase were heavily pre-treated, with more than four previous lines of therapy and multiple metastases. The PK parameters were characterized for each dose cohort; exposures were observed to increase proportionally from 0.5 mg/kg to 8.8 mg/kg, while the exposures from 8.8 mg/kg to 15.6 mg/kg were similar. Because the MTD was not reached, the RP2D of 11.8 mg/kg was selected based on PK exposure. The dosing schedule of 11.8 mg/kg twice weekly was further evaluated in tumor type-specific expansion cohorts, namely GBM, ovarian and pancreatic cancer, which will be reported when the survival data are fully mature. Out of 28 evaluable patients reported in this study, one patient achieved PR and 11 patients had SD, for a DCR of 42.9%. Of the SD patients who provided biopsies, 72.7% had dual high expression of both CD36 and CD47. Biomarker analyses in tumor biopsies confirmed the mechanism of action of VT1021, namely induction of TSP-1 expression in MDSCs and reprogramming of the TME from immunologically cold to hot.
Taken together these findings support the clinical advancement of VT1021 into phase 1b/ II single agent and/or combinatorial studies.
The clinical activity of VT1021 in the dose escalation portion of this phase 1 study indicates potential efficacy in select solid tumor indications which typically harbor an immunologically cold TME and for which treatment with drugs such as checkpoint inhibitors has shown very little benefit 16 . One patient with metastatic thymoma, the only patient with this indication in this study, achieved PR after the second cycle of treatment and was on study for 504 days, while 11 patients with other solid tumors including pseudomyxoma peritonei ( n = 1), leiomyosarcoma ( n = 1), appendiceal ( n = 1), uterine ( n = 1), pancreatic ( n = 1), uterine carcinosarcoma ( n = 1), kidney ( n = 1), colorectal ( n = 2), and ovarian ( n = 2) had SD. Additionally, exploratory analysis suggests that dual high expression of CD36 and CD47 may predict response. Among 9 patients with dual high expression of CD36 and CD47, 8 patients achieved SD.
Exploratory pharmacodynamic studies on paired tumor biopsies (pre-study and on-study) confirmed the mechanism of action of VT1021. The induction of TSP-1 was observed in all the tested biopsy samples. Although the number of available biopsy pairs was low, modulation of the TME from immunologically cold to hot, another hallmark of VT1021 activity, was also observed by augmented levels of active tumor-killing immune cells and lower levels of immunosuppressive cells in a majority of on-study biopsies. Although VT1021 thus far has been shown to be safe and well tolerated in patients with advanced solid tumors, there are several limitations to this study which will be addressed in future clinical trials such as increasing the number of patients in the treatment group, requiring pre-treatment biopsies from all patients prior to enrollment, and optimizing the dose of single-agent VT1021.
VT1021 is the first clinical-stage molecule that functions by stimulating the expression of TSP-1 in the TME. The stimulation of TSP-1 simultaneously targets both CD36 and CD47, harnessing the full anti-tumor activity of TSP-1. Other drugs have attempted to exploit the anti-tumor activity of TSP-1 by utilizing small regions of the protein 17 , 18 that were developed to target CD36 and CD47 individually 15 . VT1021, however, induces the production of endogenous, localized, full-length TSP-1 in MDSCs, potentially improving TSP-1-dependent efficacy. Expression of full-length TSP-1 causes tumor reduction by CD36-dependent induction of apoptosis in tumor cells and endothelial cells. TSP-1 also blocks the CD47-SIRPα “do-not-eat-me” macrophage checkpoint to enable phagocytosis of tumor cells 19 , 20 .
The unique ability of VT1021 to target both CD36 and CD47 concurrently underscores the novel, first-in-class status of this molecule. The expansion phase of this study in selected solid tumor cohorts has recently been completed, and results from this phase, as well as exploration of potential predictive and pharmacodynamic biomarkers, will be reported separately once survival data are more mature. VT1021 is currently in a global registration-ready clinical study (AGILE) for both newly diagnosed and recurrent GBM patients. Additional studies have been planned for other solid tumor indications, as single agent and as part of combination regimens with standard-of-care chemotherapies and immune checkpoint inhibitors.

Background
VT1021 is a cyclic peptide that induces the expression of thrombospondin-1 (TSP-1) in myeloid-derived suppressor cells (MDSCs) recruited to the tumor microenvironment (TME). TSP-1 reprograms the TME via binding to CD36 and CD47 to induce tumor and endothelial cell apoptosis as well as immune modulation in the TME.
Methods
Study VT1021-01 (ClinicalTrials.gov ID NCT03364400) used a modified 3 + 3 design. The primary objective was to determine the recommended Phase 2 dose (RP2D) in patients with advanced solid tumors. Safety, tolerability, and pharmacokinetics (PK) were assessed. Patients were dosed twice weekly intravenously in 9 cohorts (0.5–15.6 mg/kg). Safety was evaluated using CTCAE version 5.0 and the anti-tumor activity was evaluated by RECIST version 1.1.
Results
The RP2D of VT1021 is established at 11.8 mg/kg. VT1021 is well tolerated with no dose-limiting toxicities reported (0/38). The most frequent drug-related adverse events are fatigue (15.8%), nausea (10.5%), and infusion-related reactions (10.5%). Exposure increases proportionally from 0.5 to 8.8 mg/kg. The disease control rate (DCR) is 42.9% with 12 of 28 patients deriving clinical benefit including a partial response (PR) in one thymoma patient (504 days).
Conclusions
VT1021 is safe and well-tolerated across all doses tested. The RP2D has been selected for future clinical studies. PR and stable disease (SD) with tumor shrinkage are observed in multiple patients, underscoring the single-agent potential of VT1021. Expansion studies in GBM, pancreatic cancer and other solid tumors at the RP2D have been completed and results will be communicated in a separate report.
Plain language summary
It may be possible to treat cancers with therapies that modify the tumor microenvironment. This is the environment in the body in which tumors survive and grow and is composed of different types of cells. One such potential therapy is VT1021. Here, we conduct the first clinical trial to test this therapy in patients. We identify the optimal dose of the treatment to take into further studies, finding that VT1021 is safe and well tolerated by patients. We see some signs that the treatment is working in some patients and see evidence of modification of the tumor microenvironment. These findings help to inform further clinical trials of VT1021 to determine whether it is safe and effective in larger cohorts of patients.
Mahalingam et al. report findings from a first-in-human dose escalation study of the tumor microenvironment modulator VT1021 in patients with advanced solid tumors. VT1021 is found to be safe and well tolerated and the recommended phase II dose is established based on pharmacokinetic/dynamic properties and preliminary clinical activities.
Supplementary information
The online version contains supplementary material available at 10.1038/s43856-024-00433-x.
Author contributions
Patient enrollment and study monitoring: D.M., W.H., A.P., A.B. Development of methodology for biomarker studies: S.W., M.Y.V., J.J.C., J.W. Acquisition of data: S.W., M.Y.V., H.P., J.C., M.C. Analysis and interpretation of data: S.W., M.Y.V., J.J.C., R.S.W., J.C., J.M., M.C., J.W. Study supervision: M.C., J.W.
Peer review
Peer review information
Communications Medicine thanks Jan Rekowski, Renuka Iyer, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Data availability
All data supporting the findings of this study are available within the paper and its Supplementary Information. Individual participant data that underlie the results reported in this paper will be shared upon request after deidentification. The study protocol has been provided in the Supplementary Information. Source data for the figures are available as Supplementary Data 1 and Supplementary Data 2 . Additional data are available from the corresponding author upon reasonable request. Data requests submitted by researchers who provide a methodologically sound proposal will be accepted beginning immediately following publication and ending 5 years following publication.
Competing interests
Vigeo Therapeutics designed and sponsored the clinical study in this article. R.S.W. is a co-founder of, and consultant for, Vigeo Therapeutics, which has licensed technology from Boston Children’s Hospital. M.Y.V., J.J.C., S.W., H.P., J.C., J.M., M.C., and J.W. are employees of Vigeo Therapeutics. D.M., W.H., A.P., and A.B. declare no competing interests.

License: CC BY. Citation: Commun Med (Lond). 2024 Jan 13; 4:10
PMC10787779 (PMID: 38218994)

Introduction
In humans, the superfamily of cytochrome P450 (CYP) enzymes comprises 18 families of heme-containing proteins that belong to the group of oxidoreductases. Enzymes of only three CYP families are involved in hepatic drug metabolism. The name CYP450 derives from the characteristic light absorbance at 450 nm caused by the inherent heme group 1 . Due to their almost unique properties, reaction mechanism and long evolutionary history, these enzymes have been studied intensely in various respects since their discovery 2 , 3 . These studies have had great practical benefit and enable, for example, the use of most drugs in medicine 4 .
Eukaryotic CYPs can be found in different cell organelles, mainly in the endoplasmic reticulum 5 , 6 . Since CYPs are the only human enzymes capable of catalyzing the hydroxylation of non-activated carbon atoms, they have a very broad and overlapping substrate specificity; in addition, they form a variety of isoforms 7 . In hepatocytes, CYPs mainly process xenobiotics, thereby conducting the first step of their excretion from the body 8 . Many drugs, being xenobiotics themselves, are likewise metabolized by CYPs. The importance of CYPs for drug development and validation is therefore far-reaching and has already been described in detail in various reviews 9 – 11 . Due to individual CYP polymorphism, CYPs are also an important factor in the implementation of individualized medicine, particularly in the dosage of pharmaceuticals 12 . The majority of hepatically metabolized drugs involve the CYP enzymes CYP1A2, CYP2B6, CYP2C9, CYP2C19, CYP2D6 and CYP3A4/5, which account for more than 79% of drug oxidation 13 , 14 . Some drugs are designed as prodrugs to be bioactivated by CYPs in order to form an active component 15 . On the other hand, CYPs significantly determine the half-life of certain drugs. In both cases, the drug concentration in an organism is highly dependent on CYPs. Additionally, drugs themselves can act as activators and inhibitors of CYPs. Therefore, interference of drugs with the enzymatic pathways of CYPs must be considered, as such interference may result in malfunction of metabolic mechanisms and thereby lead to severe side effects 14 .
Since CYPs have an extensive influence on drug metabolism, a reliable screening system with specific CYPs would facilitate the investigation of the metabolic fate of novel drugs as well as of CYP inhibitors and modulators in drug discovery. Both CYP inhibition and induction can lead to failures of several drugs and consequent withdrawal from the market; this issue has been addressed in a comprehensive publication 14 . To circumvent such issues, screenings can be performed using human liver cell microsomes from primary hepatocytes, representing the physiological CYP spectrum, or, more CYP-specifically, using microsomes derived from genetically modified cells that express only one CYP, so-called mono-CYP microsomes. The use of primary hepatocyte microsomes for CYP analysis as well as drug metabolization studies bears the disadvantage of addressing all endogenous CYP variants simultaneously, thereby impeding mono-CYP analysis. The recombinant expression of CYPs has already been extensively investigated 16 . However, mammalian CYPs in particular are difficult to produce. Restrictions are caused by the complex co-factor requirements, such as the heme group and the availability of the redox partner, cytochrome P450 oxidoreductase (CPR) 17 .
Bacterial systems such as Escherichia coli are probably the most popular platform for recombinant protein expression due to their straightforward handling and high growth rate. However, the synthesis of complex eukaryotic proteins like CYPs is unfavorable in bacteria, since these proteins can only be expressed in a modified soluble form 16 , 18 . In contrast to prokaryotes, yeast as well as higher eukaryotes like insect and mammalian cells possess organelles such as the endoplasmic reticulum and the Golgi apparatus, enabling proper anchoring of membrane-bound proteins. Usually, the redox partner NADPH-cytochrome P450 oxidoreductase (CPR) is co-expressed to ensure CYP activity 19 – 21 . Additionally, the co-expression of chaperones has led to an increased yield of active protein for some CYPs by supporting the folding mechanism 22 . The co-expression of auxiliary factors thus plays a role in prokaryotic and eukaryotic recombinant expression systems alike.
For industrial purposes, a common expression host for recombinant eukaryotic CYPs is Saccharomyces cerevisiae , for example for the large-scale production of the antimalarial artemisinin through the co-expression of a CPR, CYP71AV1 from Artemisia annua and other enzymes 23 . Mammalian CYPs have also been used to design a biosynthetic pathway in yeast, including 4 CYPs, for the generation of hydrocortisone from simple carbon sources 24 . When it comes to mammalian expression systems, several liver cell lines have been shown to be capable of CYP overexpression. However, these cell lines come with the disadvantage of background CYP activity. This problem can be circumvented by functional overexpression of CYPs together with CPR in CHO or HEK cells, as shown in recent studies 25 , 26 . However, harnessing the advantages of eukaryotic systems for the cell-free synthesis of CYPs remains an unexplored field. Cell-free protein synthesis (CFPS) has the potential for flexible and adjustable analysis of individual CYPs. Taking advantage of endogenous membrane structures, eukaryotic cell-free systems have been successfully used for the synthesis of other membrane-localized proteins 27 . In contrast to cell-based expression systems, the CYP translation process is directly accessible. This open system can be directly manipulated and allows the straightforward supplementation of additional components to the reaction, such as heme and heme precursors (δ-aminolevulinic acid, glucose, glycine, as well as different iron species) and heme-producing enzymes, to obtain active CYPs 28 . Additionally, it is possible to modify the cells that are used for lysate production, similar to cell-based systems. In this way, CPR can be integrated into the endogenous microsomes in advance to create a suitable reaction environment for CYPs. The stable modification of eukaryotic CHO cells with CPR has been realized earlier 20 .
The desired CYP enzyme can be synthesized in the translationally active modified CHO-CPR lysate in a straightforward manner. Subsequently, various screening assays can be performed without any purification or further processing steps in 96- and 384-well plates. In this context, the advantages and limits of cell-free synthesis for drug development have been addressed recently 29 .
Template generation
Templates for the synthesis of CYPs in cell-free systems were generated by Biocat GmbH. The protein-encoding sequence and further regulatory elements for cap-independent protein synthesis using a Cricket paralysis virus IRES 30 (gene numbers 714916-1/2/3, 724709-12) were integrated into a pUC57-1.8k vector backbone.
Cell fermentation, lysis and lysate procession
Suspension-adapted Chinese Hamster Ovary cells (CHO-K1) were routinely cultivated in ProCHO5 medium (Lonza Group AG, Basel, Switzerland) supplemented with 6 mM l -alanyl- l -glutamine (Merck, Darmstadt, Germany). CHO suspension cells were cultured in non-baffled flasks (Corning, New York, USA) at 37 °C and 5 vol-% CO 2 at 100 rpm on an orbital shaker. CHO cells were grown in suspension cultures in shaking flasks to a maximal volume of 500 mL or in a 5 L bioreactor and were harvested at a density of approximately 4 × 10 6 cells/mL. During incubation in the fermenter, viability, oxygen concentration, pH and cell density were monitored. Cell washing, lysis and lysate processing were performed as described earlier 27 , 31 , 32 . In short, cells were centrifuged at 200× g for 10 min, and the pellet was washed with 40 mM HEPES–KOH (pH 7.5), 100 mM NaOAc and 4 mM DTT. The pellet was then resuspended in the same buffer at a density of approximately 5 × 10 8 cells/mL. Cell disruption was performed by syringing the suspension through a 20-gauge needle. After a final centrifugation step at 10,000× g for 10 min, the supernatant was applied to a size-exclusion chromatography column (Sephadex G-25, GE Healthcare, Freiburg, Germany) and elution fractions with high RNA content were pooled. Residual mRNA was digested by addition of 10 U/mL micrococcal nuclease S7 (Roche, Mannheim, Germany) and 1 mM CaCl 2 . After incubation for 2 min, 6.7 mM EDTA (f.c.) was added. Finally, the lysate was immediately shock-frozen and stored at − 80 °C.
Lysates were prepared from CHO-K1 cells. In addition to the CHO-K1 wild-type cell line, lentivirally modified CHO cells that express either human CPR (CHO-CPR) or human CPR together with CYP3A4 (CHO-CPR/CYP3A4) were used. Blasticidin (Biovision GmbH, Ilmenau, Germany) (3 μg/mL, resistance of the CPR expression vector) or Blasticidin and Zeocin (Abcam, Cambridge, UK) (300 μg/mL, resistance of the CYP3A4 expression vector) were added to the culture medium to maintain the expression of human CPR or CPR/CYP3A4 in the corresponding CHO cell lines. The lysis process for the generation of translationally active lysates was the same as for wild-type CHO-K1 cells.
Cell-free protein synthesis
Synthesis of proteins in translationally active lysates derived from cultured Chinese hamster ovary (CHO) cells and their modified variants, CHO-CPR and CHO-CPR/CYP3A4, was performed in batch-based systems as previously described 27 . Accordingly designed plasmids suitable for cell-free protein synthesis (CFPS), coding for the CYP of interest, were applied as templates. T7 RNA polymerase, amino acids, an energy regeneration system and other supplements were added to the translationally active lysates; unless noted otherwise, 5 μM heme (porcine) (Alfa Aesar, Haverhill, Massachusetts, USA) was additionally supplemented and the reaction temperature was set to 24 °C. For the isolation of microsomes, the translation mixture (TM) was centrifuged at 16,000× g for 10 min at 4 °C. The pellet was resuspended in the same volume of PBS to receive the microsomal fraction (MF). The microsomal fraction comprises the endogenous microsomes derived from the endoplasmic reticulum, including the de novo synthesized membrane-bound proteins.
Protein yield determination
To validate successful cell-free protein synthesis, radioactive labeling of de novo synthesized proteins with 14 C-leucine was performed, enabling qualitative characterization by autoradiography and quantitative analysis through scintillation counting as described earlier 33 . Disintegrations per minute (dpm) were measured by liquid scintillation counting using the Hidex 600 SL (Hidex). Protein yields were calculated based on the dpm, the molecular weight of the synthesized protein, the specific radioactivity A spec (Eq. 1 ) and the total number of leucines in the target protein (Eq. 2 ).
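As a worked illustration of this type of calculation (the exact Eq. 1 and Eq. 2 are not reproduced here, and all numerical values below are invented for the example, not taken from the study), the generic scheme for 14 C-leucine-based yield determination can be sketched as follows:

```python
# Illustrative sketch of a 14C-leucine-based yield calculation. The specific
# radioactivity A_spec relates measured dpm to pmol of labeled leucine; the
# leucine count per protein then converts to pmol protein, and the molecular
# weight to mass. All input numbers are assumed example values.

def specific_radioactivity(dpm_added_total: float, leucine_total_pmol: float) -> float:
    """A_spec in dpm per pmol leucine: total 14C activity in the reaction
    divided by the total leucine pool (labeled plus unlabeled)."""
    return dpm_added_total / leucine_total_pmol

def protein_yield_ug_per_ml(dpm_measured: float, a_spec: float,
                            n_leucines: int, mw_g_per_mol: float,
                            sample_volume_ml: float) -> float:
    """Convert measured dpm of a precipitated aliquot into µg/mL protein."""
    leucine_pmol = dpm_measured / a_spec        # pmol labeled leucine in sample
    protein_pmol = leucine_pmol / n_leucines    # pmol protein
    protein_ug = protein_pmol * mw_g_per_mol * 1e-6  # pmol * g/mol = pg -> µg
    return protein_ug / sample_volume_ml

# Example with assumed values (hypothetical 55 kDa protein with 48 leucines,
# 5 µL aliquot measured):
a_spec = specific_radioactivity(dpm_added_total=5.0e6, leucine_total_pmol=1.0e6)
yield_ug_ml = protein_yield_ug_per_ml(dpm_measured=1000, a_spec=a_spec,
                                      n_leucines=48, mw_g_per_mol=57000,
                                      sample_volume_ml=0.005)
# yield_ug_ml -> 47.5 µg/mL
```

The unit bookkeeping (dpm → pmol leucine → pmol protein → µg/mL) is the essential point; the actual constants depend on the labeling mix used in each reaction.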
Acetone precipitation, SDS-PAGE and Autoradiography
Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and autoradiography were used to analyze the homogeneity and molecular weight of in vitro translated proteins. 45 μL of water were added to 5 μL of sample and proteins were precipitated with 150 μL ice-cold acetone at 4 °C for at least 15 min. Precipitated proteins were pelleted at 16,000× g for 10 min at 4 °C. Protein pellets were dried for 1 h at 45 °C and resuspended in 20 μL LDS sample buffer. The samples were loaded onto 10% SDS-PAGE gels. SDS-PAGE was performed at 150 V for 1 h. The gels were stained for 1 h using SimplyBlue SafeStain and destained in water overnight. The gels were dried (Unigeldryer) for 70 min at 70 °C. The dried gels were placed on a phosphor screen for at least three days. Radiolabeled proteins were visualized on an Amersham Typhoon laser scanner (GE Healthcare).
Western blot
Western blotting and subsequent antibody detection were used for the identification of endogenous and de novo synthesized CYP3A4 and CPR in the translation mixture of the cell-free synthesis reaction. SDS-PAGE was performed as described above. Proteins were blotted onto a PVDF membrane with an iBlot device (Thermo Fisher Scientific, Waltham, Massachusetts, USA). The membrane was washed three times with TBS and subsequently blocked with 2% bovine serum albumin (BSA) (Carl Roth GmbH + Co. KG, Karlsruhe, Germany) overnight at 4 °C. After three washing steps with TBS/T, the membrane was incubated with the primary antibody at a concentration of 0.4 μg/mL in 2% BSA for three hours at room temperature. The blot was washed three times with TBS/T and incubated with a secondary horseradish peroxidase (HRP)-linked antibody at a final concentration of 0.5 μg/mL in 2% BSA at room temperature for one hour. Three final washing steps in TBS/T were performed. Chemiluminescent signals were detected after incubation with ECL detection reagent. The primary antibody used for the detection of CPR was “CYPOR (F-10): sc-25270” (Santa Cruz Biotechnology, Dallas, Texas, USA); the primary antibody used for the detection of CYP3A4 was “CYP3A4 (HL3): sc-53850” (Santa Cruz Biotechnology, Dallas, Texas, USA).
Fluorescence microscopy
Confocal laser scanning microscopy was used to analyze protein translocation. In preparation, the microsomal fraction was separated from the rest of the translation mixture as described above. 5 μL of the MF were diluted in 15 μL PBS and transferred onto chambered coverslips (ibidi GmbH, Gräfelfing, Germany). The samples were analyzed by confocal laser scanning microscopy using an LSM 510 Meta (Zeiss). Samples were excited with an argon laser at 488 nm, and the emission signals were recorded with a bandpass filter in the wavelength range from 505 to 550 nm. Photobleaching was performed using an argon laser at 488 nm at 100% laser intensity. After photobleaching, pictures were taken every minute for 14 min.
CPR activity assay
CPR activity was determined by the NADPH-dependent conversion of the water-soluble tetrazolium salt WST-8 using the “Cytochrome P450 Reductase Activity Assay Kit” (Abcam, Cambridge, UK). The assay was performed according to the manufacturer’s protocol. The activity was determined directly in the translationally active lysates of wild-type (wt) CHO cells and CHO-CPR cells. Additionally, the microsomal fraction was isolated as described previously. The activity was quantified using a calibration curve generated with standards supplied by the kit.
CYP activity assays
For CYP activity measurements, “P450-Glo™ Assays” (Promega, Madison, Wisconsin, USA) were used. CYP1A2 activity was detected by methoxy-luciferin (Luciferin-ME) turnover (V8772), CYP2B6 by dimethoxybutyl-luciferin (Luciferin-2B6) turnover (V8321) and CYP3A4 by luciferin isopropyl acetal (Luciferin-IPA) turnover (V9001). The reaction was performed according to the Promega “P450-Glo™ Assays” protocol, except that the CYP reaction time was prolonged to 1 h unless otherwise noted. The reaction temperature was set at 37 °C. The NADPH Regeneration System (V9510) was used for the supply of NADPH during the assay. Three controls were performed: a buffer control, a no-template control and a positive control using human liver cell microsomes (Gibco™ Human Microsomes, 50 Donors) (Thermo Fisher Scientific, Waltham, Massachusetts, USA). Human microsomes were tested at a final concentration of 0.4 mg/mL. If not otherwise noted, 5 μL of the microsomal fraction of the cell-free reaction were applied as samples to the activity assay. For comparisons across reaction conditions, CYP activities were usually expressed as a percentage of the highest CYP activity in the assay. For CYP activity quantification, a standard curve was prepared using beetle luciferin (Promega, Madison, Wisconsin, USA) according to the protocol.
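The quantification step via the luciferin standard curve amounts to a simple linear calibration. A minimal sketch is given below; the standard-curve concentrations and luminescence readings are invented for illustration and are not values from the assay:

```python
# Hedged sketch: converting a luminescence reading into luciferin product
# concentration via an ordinary least-squares fit of a standard curve.

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Assumed standard curve: beetle luciferin [µM] vs. relative luminescence units
std_conc = [0.0, 0.25, 0.5, 1.0, 2.0]
std_rlu = [50, 2550, 5050, 10050, 20050]   # toy data, perfectly linear

slope, intercept = linear_fit(std_conc, std_rlu)
sample_rlu = 7550                           # hypothetical sample reading
luciferin_um = (sample_rlu - intercept) / slope   # µM product formed
```

In practice the fit would be applied to replicate standards, and activities at different conditions can then be normalized to the highest measured activity as described above.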
Indirect substrate screening
Luciferase-based assays were also used as a preliminary screening procedure for the turnover of various potential CYP substrates. The substrates testosterone (Sigma-Aldrich Chemie GmbH, Taufkirchen, Germany), midazolam (Sigma-Aldrich Chemie GmbH, Taufkirchen, Germany), efavirenz (Fisher Scientific GmbH, Schwerte, Germany) and phenacetin (Fisher Scientific GmbH, Schwerte, Germany) were dissolved at a concentration of 3 mM in 100% methanol. Cholesterol, a steroid not known to be a substrate of the selected CYPs, was used as a control substrate and was prepared in the same way as the substrates. Cell-free CYP synthesis and isolation of the microsomal fraction were performed as described above. The luciferase-based CYP activity assay was performed using 5 μL of the microsomal fraction from the cell-free synthesis of CYP1A2, CYP2B6 and CYP3A4. The analyzed substrate was added to the CYP reaction at a final concentration of 200 μM in parallel to the specific luciferase substrate. A vehicle control with methanol was performed to exclude an influence of the solvent. Changes in the turnover of the luciferase product indicate an interaction of the test substrate with the tested CYP. Changes in the luminescence signal were expressed as a percentage with reference to the result from the batch without addition of a substrate.
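The screening readout described above reduces to a percent-of-control normalization: a drop in luminescence relative to the substrate-free batch suggests that the test substrate competes with the luminogenic probe. A minimal sketch, with invented luminescence readings (the substrate names match the screen, but every number is purely illustrative):

```python
# Indirect substrate screening: express each batch's luminescence as a
# percentage of the no-substrate control. All readings are assumed values.

def percent_of_control(sample_rlu: float, control_rlu: float) -> float:
    """Luminescence of a substrate-containing batch as % of control."""
    return 100.0 * sample_rlu / control_rlu

readings = {
    "no substrate": 12000,    # control batch
    "testosterone": 4800,     # strong signal drop -> probe competition
    "cholesterol": 11700,     # control substrate, little change expected
    "vehicle (MeOH)": 11900,  # solvent control
}
control = readings["no substrate"]
signals = {name: percent_of_control(rlu, control) for name, rlu in readings.items()}
```

With these toy numbers, testosterone gives 40% of the control signal, while cholesterol and the vehicle remain close to 100%, i.e. only the test substrate interacts with the probe reaction.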
Statistical analysis
Excel Data Analysis tools were used for statistical analysis, in particular to test for statistically significant differences between two independent samples. For normally distributed data, an F-test for equality of variances was performed first, followed by the t-test corresponding to the variance result (equal or unequal variances).
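The same decision procedure can be sketched programmatically. This is a hedged re-implementation in Python/SciPy rather than the Excel tools actually used, and the two activity samples are invented example data:

```python
# F-test for equality of variances, then the matching two-sample t-test
# (Student's for equal variances, Welch's otherwise). Data are illustrative.
from scipy import stats

def two_sample_test(a, b, alpha=0.05):
    # Two-sided F-test on the unbiased sample variances
    va, vb = stats.tvar(a), stats.tvar(b)
    f = va / vb
    dfa, dfb = len(a) - 1, len(b) - 1
    p_f = 2 * min(stats.f.sf(f, dfa, dfb), stats.f.cdf(f, dfa, dfb))
    equal_var = bool(p_f >= alpha)   # fail to reject -> assume equal variances
    res = stats.ttest_ind(a, b, equal_var=equal_var)
    return equal_var, res.statistic, res.pvalue

# Assumed example: CPR activity (arbitrary units) in wt vs. CHO-CPR lysates
wt = [1.0, 1.2, 0.9, 1.1]
cpr = [3.4, 3.6, 3.1, 3.5]
equal_var, t, p = two_sample_test(wt, cpr)
# equal_var -> True (similar spread), p well below 0.05 (clear mean difference)
```

The design choice mirrors the described workflow: the F-test only selects between the equal-variance and Welch variants of the t-test, and significance is then read from the t-test p-value.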
CYP3A4 is known to be involved in the metabolism of most approved medications. Consequently, it was selected as a model protein for initiating cell-free synthesis of cytochrome P450s in eukaryotic lysates.
Generation of a modified CHO-CPR Lysate
Modified CHO-K1 cell lines were cultivated similarly to the wild-type CHO-K1 cells described earlier 27 . Using a lentiviral vector system, CHO-CPR and CHO-CPR/CYP3A4 cell clones were generated. A doubling time of about 48 h for CHO-CPR and CHO-CPR/CYP3A4, compared to about 24 h for the wild-type cell line, slowed down the process but did not prevent the achievement of sufficiently high cell densities. The different cell lines were harvested in the exponential growth phase. Typical growth conditions in the fermenter are shown exemplarily for CHO-CPR cells (Appendix). To retain translocationally active microsomes in the lysate, cells were mildly disrupted using a 20-gauge syringe. After buffer exchange and supplementation of the raw lysate, translationally active lysates of the modified cell lines were prepared in the same way as wild-type cell lysates. With total target protein yields of around 40 μg/mL under standard conditions, protein translation in the modified lysates is in the same range as observed for typical CHO-based cell-free protein synthesis. After validation of translational activity, the lysates from CHO-CPR cells and CHO-CPR/CYP3A4 cells were additionally analyzed for their CPR and CYP activity.
Validation of CPR activity in the generated CHO-CPR-lysates
CPR acts as the co-enzyme of the analyzed CYPs and is therefore mandatory for their activity. Engineering of CHO-K1 cells yielded a more than threefold increase in detectable CPR activity (Fig. 1 A). The processed lysates from CHO-CPR cells were centrifuged at 16,000× g for 10 min to separate the endogenous microsomes from the soluble components of the lysate. Activity in the CPR assay was drastically improved compared to CHO wild-type cells. The activity was detected predominantly in the microsomal fraction, which accounted for about 90% of the total activity in the translation mixture (Fig. 1 B). While this increase in activity can be linked to the overexpression of CPR, the residual low activity measured in the supernatant fraction could also stem from soluble cytoplasmic reductases such as novel reductase 34 . Moreover, the signal in the supernatant might result from smaller, non-pelleted vesicles in the CHO system, which would require a higher centrifugation speed 35 .
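The partitioning of CPR activity between the two fractions reduces to a simple percentage; the activity readings below are hypothetical and chosen only to mirror the ~90% figure reported above.

```python
# Share of total CPR activity recovered in the microsomal fraction (MF)
# after separating MF and supernatant by centrifugation.
mf_activity, supernatant_activity = 4.5, 0.5      # arbitrary units, assumed

total = mf_activity + supernatant_activity
mf_share = 100.0 * mf_activity / total
print(round(mf_share))                            # percent in MF -> 90
```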
To further characterize the CPR activity of the lysates on CYPs, cell-free synthesized CYP3A4 was produced in the translationally active lysates. CYP3A4 served as model protein for the cell-free synthesis of CYPs.
Cell-free synthesis of CYP3A4 in CHO-CPR lysates
CYP3A4 was produced in a batch-based cell-free synthesis. The synthesis was performed using three different lysates: a wild-type CHO lysate, a lysate from the aforementioned CPR-expressing CHO cell line, and a lysate from a CHO cell line expressing CPR as well as CYP3A4. For each lysate, a negative-control cell-free reaction without the addition of any DNA template (no-template control, NTC) was performed. The presence of CPR and CYP3A4 in the translation mixture from each batch reaction was visualized via antibody detection on a western blot (Fig. 2 A,B). In the anti-CPR western blot, well-defined bands are detectable at approximately 90 kDa; however, these are less prominent in the wild-type CHO lysate than in the modified lysates (Fig. 2 A). A well-defined band at ~ 55 kDa and a second 50 kDa side band in the anti-CYP3A4 western blot can be detected in every sample in which CYP3A4 was synthesized in a cell-free manner (Fig. 2 B). In addition, a much weaker band at ~ 55 kDa can be detected in the NTC of CHO-CPR/CYP3A4 lysates. Besides the western blot, autoradiography was used to visualize the cell-free synthesized CYP3A4 labeled with 14 C-leucine (Fig. 2 C). Similar to the anti-CYP3A4 western blot, a well-defined band at about 55 kDa with a 50 kDa side band was detected in the samples containing the DNA template, whereas no bands were observed in any NTC.
A CYP3A4-specific luminescent assay (Luminescent Assays and Screening Systems for Measuring CYP Activity; Promega, Madison, USA) was performed for enzyme activity determination. The CHO-WT lysate already shows production of active CYP while producing a low background signal. This activity could be increased by co-synthesizing CPR in the cell-free reaction (Fig. A2 ). However, by far the highest CYP3A4 activity was measured in CHO-CPR lysate after cell-free CYP3A4 synthesis, notably exceeding the activity of CYP3A4 in the CHO-WT lysate (Fig. 2 D). Alternatively, the cell-free synthesis of CYP3A4 in an insect lysate was explored; this led to comparable activities while exhibiting a higher background signal (Fig. A2 ).
Adaptations of reaction conditions
Heme is a cofactor of CYPs and is therefore indispensable for their function. Adaptation of the amount of heme supplemented to the cell-free reaction is therefore mandatory. Heme was supplemented at different concentrations to different batches of the cell-free reaction, and the CYP activity in the MF was determined by the luciferase-based CYP3A4 activity assay. A heme concentration of 5 μM resulted in the highest CYP3A4 activity, more than twofold higher than the control without supplementation (Fig. 3 ).
Supplementation with higher heme concentrations resulted in a uniformly ~ 60% reduced activity compared to the 5 μM heme-supplemented sample. A concentration of 5 μM heme was therefore used in all subsequent batches. For increased CYP activity, the cell-free reaction temperature was set to 24 °C instead of the 30 °C usual for CHO-based CFPS (Fig. A3 ).
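A titration like the one described reduces to picking the concentration with the highest activity; the relative activities below are hypothetical values constructed only to follow the reported trend (optimum at 5 μM, roughly 60% lower activity above it).

```python
# Relative CYP3A4 activity (% of the no-heme control) per heme concentration
titration = {0: 100, 1: 150, 5: 230, 10: 95, 20: 92}   # uM -> activity, assumed

best_heme = max(titration, key=titration.get)
print(best_heme)                             # optimum concentration -> 5
print(titration[best_heme] / titration[0])   # fold-change over control -> 2.3
```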
Localization, yield and activity of cell-free produced CYP3A4
CYPs are membrane-associated proteins, but in contrast to most transmembrane proteins they are only N-terminally anchored to the membrane and have a partially lipophilic surface oriented towards the membrane. The translocation process therefore differs from that of other membrane proteins that have already been produced successfully by CHO-based cell-free protein synthesis (CFPS). Consequently, localization and the influence of signal sequences are important issues for the cell-free synthesis of CYPs. The localization of the cell-free produced CYPs was analyzed using confocal laser scanning microscopy. For this purpose, templates for CYP3A4-eYFP fusion proteins were generated. Additionally, a template containing a melittin signal sequence upstream of the transmembrane segment (Mel-CYP3A4-eYFP) was generated. Both templates were used for cell-free protein synthesis in the modified CHO lysates. Fluorescence microscopy reveals a distinct difference between CYPs produced with the Mel-containing template and those produced without the Mel signal sequence: the CYPs harboring the signal sequence are preferentially localized at the endogenous microsomes (Fig. 4 A). According to the yield determination, the addition of the melittin signal sequence increased the translocation rate by 40% (Fig. 4 C). However, despite the higher protein yields, the volumetric activity increased only slightly (Fig. 4 B).
Co-localization was further analyzed by comparing the microsomal fraction of the CYP sample with the microsomal fraction of an NTC to which the supernatant fraction of the sample had been added and incubated for about an hour. To determine whether functional posttranslational translocation into the microsomes had in fact occurred, this sample was analyzed using fluorescence microscopy (Fig. A4 ) and the activity assay. Usually, a co-translational translocation would be expected; however, fluorescence microscopy reveals an image similar to that of the microsomal fraction, although the overall intensity of the fluorescence signal appears lower than in the MF. The CYPs of the supernatant fraction are not active despite being co-localized with the microsomes of the NTC batch (Fig. 5 B) and despite displaying a higher target protein yield than the CYPs in the microsomal fraction (Fig. 5 A).
Since yields of active protein could not be improved, the data imply that a notable amount of Mel-CYP3A4 is produced in an inactive form; further experiments were therefore performed using CYP without the melittin signal peptide.
Synthesis of different CYPs and turnover of pharmaceutically relevant CYP substrates
The application of cell-free protein synthesis enables the time-saving synthesis and analysis of different proteins via template exchange. Besides CYP3A4, CYP1A2 and CYP2B6 were synthesized in the modified CHO cell-free system using the same adapted reaction conditions. Yield determination was performed by scintillation counting of 14 C-labeled protein, and additional activity assays were performed using the corresponding luciferase-based assay (Fig. 6 A). All CYPs were active in the microsomal fraction with almost zero background. Cell-free produced CYP2B6 had the highest activity (15 μU/mL), followed by CYP3A4 (4 μU/mL) and CYP1A2 (2 μU/mL).
An indirect activity assay based on the luciferase assay can identify potential CYP substrates and inhibitors in a screening procedure. For this purpose, various known pharmaceutically relevant CYP substrates (testosterone, midazolam, efavirenz and phenacetin) were used as a proof of principle. CYP1A2, CYP2B6 and CYP3A4 were synthesized cell-free in modified CHO-CPR lysates. Microsomes containing the CYPs were isolated and applied to the corresponding luciferase-based activity assay. The CYP substrates to be analyzed were added to the mono-CYP microsomes in the activity assay at a final concentration of 200 μM each. A sample without additional substrate (vehicle) and an additional sample with cholesterol, a substance that does not interact with the respective CYPs, were prepared as references. The addition of CYP substrates should lead to a decrease of the luciferase signal in batches with interacting substrates due to competitive substrate turnover (Fig. 6 B).
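The decision rule of this indirect screen can be sketched as follows; the ±20% threshold and the read-out values are assumptions for illustration, not values defined in the assay protocol.

```python
# Classify a substrate from its luciferase read-out relative to vehicle (=100%)
def classify(activity_pct: float, threshold: float = 20.0) -> str:
    if activity_pct <= 100.0 - threshold:
        return "interaction (competitive turnover)"
    if activity_pct >= 100.0 + threshold:
        return "activation"
    return "no clear interaction"

# Hypothetical CYP2B6 read-outs mirroring the trends reported below
readouts = {"efavirenz": 35.0, "testosterone": 200.0, "cholesterol": 98.0}
for substrate, pct in readouts.items():
    print(substrate, "->", classify(pct))
```

A signal decrease flags competitive turnover, an increase flags activation, and a read-out near the vehicle (as for cholesterol) flags no interaction.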
In the competitive assay, CYP1A2, CYP2B6 and CYP3A4 were analyzed by their activity changes after the addition of different CYP substrates. CYP1A2 assay luciferase activity was reduced by all tested substrates, with midazolam and efavirenz having the highest impact. CYP2B6 assay luciferase activity reduction was only observed after addition of efavirenz, whereas the supplementation of testosterone led to a 100% increase of monooxygenase activity for CYP2B6. CYP3A4 assay luciferase activity was drastically reduced by testosterone, midazolam and efavirenz.
Discussion
Recombinant expression of membrane proteins has been challenging for many years 36 , 37 . More than half of all pharmacologically relevant proteins are membrane-bound 38 . Therefore, there is outstanding interest in the development of efficient procedures to produce a wide variety of functional membrane proteins. Recent progress in CFPS has led to the successful synthesis of various toxic and membrane-bound proteins, making them accessible for research and development 29 , 39 – 42 . However, there are only a few studies on CYPs, one of the pharmaceutically most relevant groups of membrane proteins. Recombinant expression of these heme-containing, membrane-bound oxidoreductases has been attempted frequently in several research studies with some success 16 , 20 , but was partially limited by the lack of cofactors and of a suitable membrane environment, especially in prokaryotic expression systems 28 . Nevertheless, several commercially available products indicate that there is currently unpublished progress and certainly considerable interest in CYP production, for example by companies such as Hypha Discovery, Xenotech, Merck and Thermo Fisher. Until now, research on cell-free protein synthesis based CYP production has been only poorly covered. Cell-free protein synthesis based on vesicle-containing eukaryotic cell extracts allows for the precise development of convenient CYP substrate screening systems. In this context, the availability of ER-originating, CPR-harboring endogenous microsomes, which can be programmed with individual CYPs by cell-free synthesis, is an outstanding advantage.
The electron transfer of CPR is mandatory for the activity of CYPs; therefore, a closer look at CPR localization and activity in translationally active lysates is of fundamental importance 17 , 43 . Although CPR activity was detected in the wild-type CHO cells and their lysates per se, an increased CPR activity was detected in CHO lysates derived from the CHO-CPR cell line overexpressing human CPR. The use of CHO cells specifically designed for CYP synthesis, and in particular the production of CPR-enriched CHO-CPR lysates, led to a threefold boost of CPR activity due to its overexpression. The microsomes in the CHO lysates originate from the ER of the cells, in which CPR and most CYPs are naturally located 44 . Therefore, a natural-like translocation leading to correct localization and folding of CPR in the microsomes can be assumed. Consequently, the generated CHO-CPR lysates are optimally suited for the production of a variety of CYPs. In our study, CYP3A4 was used as the model CYP for the characterization of the generated CHO-CPR lysates, since it is the most frequently analyzed CYP and responsible for the majority of phase-I xenobiotic and especially drug metabolization in the human liver 11 . Cell-free synthesis of CYP3A4 in the modified lysates led to a fourfold increase of total CYP3A4 activity per volume of cell-free reaction compared to synthesis in conventional lysates.
The exploitation of fast and convenient high-throughput screening systems for biomolecules is one of the most remarkable advantages of open cell-free systems 45 . This platform technology enables, for example, the synthesis of different CYPs as well as different CYP variants without time-consuming cloning and fermentation steps 41 . Cell-free protein synthesis based on CHO lysates is in this context a promising technology for various applications, including in vitro drug screening platforms, CYP-specific metabolite phenotyping and synthesis, pharmacologically relevant toxicological studies, and diagnostic applications. The use of CHO cell lines in protein production is widely established in manifold processes that require a highly evolved eukaryotic expression system. CHO cell-based systems enable the synthesis of complex membrane-embedded proteins; for the first time, this is shown here in a hybrid model qualifying cell-based and cell-free protein synthesis methods side by side. The lack of CYP background activity in CHO cells 25 is an additional advantage of this particular cell line for defined CYP applications. Luciferase-based assays are well suited to quantify ratios of CYP activities in different approaches 46 , and quantification of substrate turnover can additionally be determined by mass spectrometry 47 . Western blot analysis shows that the amount of cell-free synthesized CYP is significantly higher than in the parallel cell-based approach using CYP-overexpressing CHO-CPR/CYP3A4 cells. Determining conclusively whether the double band observed in the cell-free samples stems from an alternative translation start, premature termination or other causes would ultimately require mass spectrometric analysis of the target protein. The difference in the overall synthesis level seems to be even more pronounced than the difference in activity.
This may be due to incomplete membrane integration, misfolding or aggregation of a certain amount of the cell-free synthesized CYP. Consequently, there is high potential to adapt the reaction parameters towards optimal CYP synthesis conditions and thereby further increase the CYP-specific monooxygenase activity. A prerequisite for an even more efficient synthesis of membrane proteins is a better understanding of the mechanism of translocation in a eukaryotic cell-free protein synthesis system. Translocon interactions and the entire translation process during co-translational translocation, which is essential for the correct localization and the best possible activity of CYPs, are of particular importance 48 , 49 . Additionally, the lipid composition has a significant influence on CYP activity, especially in the context of the enzyme's hydrophobic substrates 50 .
Besides CPR, heme is the most important co-factor of CYPs and is mandatory for CYP function 51 . Sufficient availability of heme during the cell-free synthesis reaction is therefore of key importance. However, high heme concentrations can lead to a decrease in protein activity due to heme's hydrophobicity and reactivity 52 , 53 , which also has a negative effect on the total amount of active CYPs. A certain basal concentration of heme might already be present in the cell-free system, since basal CYP activities can be measured even without further addition of heme 53 . Interestingly, above 5 μM a plateau below the optimum is reached. A similar observation was made for the synthesis of unspecific peroxygenases in an insect cell-free system 54 . Since in both cases the heme supplementation had no influence on the translation efficiency in the analyzed concentration range, there seems to be a more intricate underlying mechanism, potentially affecting protein folding. By using confocal microscopy, the co-localization of fluorescently labeled CYPs and microsomes can be detected. Addition of the melittin signal sequence to the template increased the apparent translocation observed during microscopy. The target protein yield in the microsomal fraction, determined by radioactive labeling, confirms an increased CYP concentration in the microsomal fraction when the melittin signal sequence is used. This is in accordance with results observed for several other cell-free synthesized secretory and membrane proteins 55 , 56 . However, the addition of the melittin signal sequence led only to a minor increase of the total volumetric activity of CYP3A4 and resulted in an accumulation of inactive CYP. Translocation efficiency is therefore probably not the limiting factor for more efficient cell-free CYP production. Future studies may identify the remaining restrictions, thereby increasing the amount of holo-CYPs.
One of the main goals of cell-free CYP synthesis is the development of a screening system 41 allowing the parallel analysis of different CYPs. As a proof of principle, the human CYP1A2 and CYP2B6 were synthesized, showing the straightforward expandability of the cell-free system to CYPs from other gene families. With contributions of 10% (CYP1A2), 5% (CYP2B6) and 20% (CYP3A4) to CYP metabolism, these CYPs are among the most important representatives of their respective gene families in research and industry 5 . Transcriptome data suggest that no homologs of these three human CYPs are expressed in CHO cells 57 . Accordingly, as for CYP3A4, no significant activity of the other CYPs was measured in lysates of parental CHO or CHO-CPR before CYP synthesis. The absence of background CYP activity also demonstrates for CYP1A2 and CYP2B6 how well the CHO cell-free system is suited for specific CYP synthesis and thereby for the generation of mono-CYP microsomes.
The turnover of pharmaceutically relevant CYP substrates by cell-free produced CYPs could be detected indirectly by analyzing the competitive turnover in the luciferase substrate-based CYP assays. Interactions of a tested CYP with defined substances result in a change in the luciferase assay activity, which was observed here for all three CYPs with several known substrates.
The steroid hormone testosterone is probably the best-studied substrate, especially concerning CYP3A4 58 , 59 . Interactions of CYP3A4 with midazolam and efavirenz 60 , 61 were also confirmed in the assay. Similar to CYP3A4, several substrates influenced the activity of CYP1A2, which is in accordance with previous studies 62 and confirms the successful cell-free synthesis of this CYP isoform. CYP2B6 activity was influenced by efavirenz, a well-known CYP2B6 substrate 63 . In contrast to the other substrates, the CYP substrate testosterone had an activity-increasing effect on CYP2B6. This atypical kinetic characteristic of substrate activation by testosterone has been observed earlier for CYP2B6 and is due to autoactivation of the enzyme 64 , 65 . Upon initial inspection this appears to be an assay anomaly; however, as the values were reproducible, it was inferred that the discrepancy is attributable to autoactivation of the enzyme upon substrate binding, resulting in increased Luc substrate turnover.
Conclusion
The high demand for active CYPs requires a straightforward method for the synthesis of members of this enzyme superfamily. Cell-free protein synthesis enables the synthesis of specific active CYPs in a time-saving procedure. By creating a vesicle-containing protein production platform from modified, CPR-overexpressing CHO cells, the generation of mono-CYP microsomes for a wide variety of future applications becomes feasible. As this synthesis methodology represents a technological innovation in the field of production of membrane-attached enzymes, there is still considerable potential to be addressed, especially regarding the optimization of the translocation process. So far, it has already been possible to use cell-free synthesized CYPs in analytical set-ups. Extensive screening procedures regarding mutations, isoforms and genetic variants, as well as detailed substrate and inducer/inhibitor screenings, are now facilitated by CFPS. These promising initial results can be a starting point for various fundamental and applied research projects.
Abstract
Cytochromes P450 (CYPs) are a group of monooxygenases that can be found in almost all kinds of organisms. For CYPs to receive electrons from the co-substrate NADPH, the activity of NADPH-cytochrome-P450-oxidoreductase (CPR) is required as well. In humans, CYPs are an integral part of liver-based phase-1 biotransformation, which is essential for the metabolization of multiple xenobiotics and drugs. Consequently, CYPs are important players during drug development, and these enzymes are therefore implemented in diverse screening applications. For these applications it is usually advantageous to use mono-CYP microsomes containing only the CYP of interest. The generation of mono-CYP-containing mammalian cells and vesicles is difficult, since endogenous CYPs are present in many cell types that contain the necessary co-factors.
By obtaining translationally active lysates from a modified CHO-CPR cell line, it is now possible to generate mono-CYPs in a cell-free protein synthesis process in a straightforward manner. As a proof of principle, the synthesis of active human CYPs from three different CYP450 gene families (CYP1A2, CYP2B6 and CYP3A4), which are of outstanding interest in industry and academia, was demonstrated. Luciferase-based activity assays confirm the activity of the produced CYPs and enable the individual adaptation of the synthesis process for efficient cell-free enzyme production. Furthermore, they allow for substrate and inhibitor screenings not only for wild-type CYPs but also for mutants and further CYP isoforms and variants. As an example, the turnover of selected CYP substrates by cell-free synthesized CYPs was demonstrated via an indirect luciferase assay-based screening setup.
Aim of the work
Cytochrome P450 enzymes are one of the best-studied classes of enzymes, but recombinant production is challenging due to their membrane localization and enzymatic coupling. The use of vesicle-based cell-free protein synthesis, which enables the fast and efficient production of various membrane proteins, can provide an alternative way of producing defined active human CYPs. The aim of this study is to outline a protein synthesis platform that enables the synthesis of all kinds of CYPs within only a few hours. For this purpose, modified CHO cell lysates containing the necessary CYP co-factors were generated and characterized. In these lysates, CYPs from different gene families were synthesized. CYP1A2, CYP2B6 and CYP3A4 are prominent representatives of the three most important human CYP families and are therefore utilized as a proof of concept in this study. Finally, cell-free synthesized CYPs are used to demonstrate the straightforward applicability of the system for screening procedures.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-024-51781-6.
Acknowledgements
For the lysate preparation, the authors would like to thank D. Wenzel (Fraunhofer IZI-BB, Potsdam-Golm, Germany).
Author contributions
J.F.K. Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Supervision, Validation, Visualization, Writing-original draft; C.S. Resources, Methodology, Writing-review & editing; A.Z. Supervision, Writing-review & editing; D.A.W. Resources, Supervision; R.M.W. Data curation, Formal analysis, Investigation, Writing-review & editing; J.H.K. Writing-review & editing, Resources; S.K. Conceptualization, Data curation, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing-review & editing.
Funding
Open Access funding enabled and organized by Projekt DEAL. This research was funded by the Ministry of Science, Research and Culture (MWFK, Brandenburg, Germany), project PZ-Syn (project number F241-03-FhG/005/001).
Data availability
All data generated or analyzed during this study are included in this published article (and its Supplementary Information files).
Competing interests
The authors declare no competing interests.
Sci Rep. 2024 Jan 13; 14:1271 (CC BY)
PMC10787780 (PMID: 38218972)
Introduction
Dye-contaminated wastewater is toxic to aquatic organisms because dyes have aromatic structures that are difficult to degrade, and the colored particles may block the transmission of light into the water body. As a result, aquatic plants and algae are unable to photosynthesize. Furthermore, the resulting lack of oxygen in water sources harms aquatic life and spoils the scenery, which is offensive to onlookers 1 . Many industries, including dye, pigment, paint, paper, printing, cosmetics, and textile manufacturing, widely use dyes in their products; direct dyes in particular are popular for long-lasting dyeing of cellulose and lignin 2 . Direct red 28 (DR28) dye is widely used for dyeing cotton in many industries, so wastewater contaminated with DR28 dye should be treated before discharge for environmental safety.
Treatment methods for dyes include coagulation-flocculation, chemical oxidation, electrochemistry, ion exchange, ozonation, photochemistry, adsorption, and biological processes 3 . Adsorption is a favored method for removing dyes because it is effective, easy to operate, reasonably priced, and compatible with a wide range of adsorbents 4 . In addition, good adsorbents should be environmentally friendly, easily accessible, cheap, and cost-effective in use, and agricultural waste is one option that meets these requirements. Many agricultural wastes have been used for removing various dyes, as shown in Table 1 . Several studies reported in Table 1 have applied sugarcane bagasse to eliminate the dyes reactive blue 19, methyl red, basic red 2, and reactive blue 4 5 – 8 , affirming the ability of sugarcane bagasse to adsorb several dyes. However, the development of sugarcane bagasse to deal with specific pollutant targets at the high concentrations typical of industrial wastewater still needs further investigation.
Many methods, including acid treatment, alkaline treatment, and metal oxide modification, are used to increase the abilities of sugarcane bagasse materials for dye removal, as also illustrated in Table 1 . In previous studies, sugarcane bagasse beads modified with titanium dioxide (TiO 2 ), magnesium oxide (MgO), aluminum oxide (Al 2 O 3 ), and zinc oxide (ZnO) have been used for removing RB4 dye 8 , 9 ; however, they have not been used to remove DR28 dye. Comparative results are therefore needed to confirm the abilities of sugarcane bagasse beads modified with those metal oxides for removing several anionic dyes. Therefore, this study investigates the abilities of sugarcane bagasse beads with or without metal oxide modifications for removing DR28 dye, to understand how the addition of different metal oxides affects DR28 dye removal and which modification offers the highest removal.
In this study, sugarcane bagasse beads (SBB), sugarcane bagasse beads modified with titanium dioxide (SBBT), sugarcane bagasse beads modified with magnesium oxide (SBBM), sugarcane bagasse beads modified with aluminum oxide (SBBA), and sugarcane bagasse beads modified with zinc oxide (SBBZ) were synthesized, and their characteristics and DR28 dye removal efficiencies were investigated. Brunauer–Emmett–Teller (BET) analysis, field emission scanning electron microscopy with focused ion beam (FESEM-FIB), energy dispersive X-ray spectrometry (EDX), and Fourier transform infrared spectroscopy (FT-IR) were used to identify their specific surface areas, pore volumes, pore sizes, surface structures, chemical elements, and chemical functional groups. In addition, their points of zero charge (pH pzc ) were investigated to determine their surface charges. The affecting factors of dosage, contact time, temperature, pH, and concentration were examined by batch tests, and the adsorption isotherms and kinetics were determined with the nonlinear models of Langmuir, Freundlich, Temkin, Dubinin–Radushkevich, pseudo-first-order kinetic, pseudo-second-order kinetic, Elovich, and intra-particle diffusion to describe the adsorption patterns and mechanisms. A thermodynamic study was also conducted to understand the effect of temperature on DR28 dye removal.
Material and method
Raw material and preparation
Sugarcane bagasse was obtained from a local market in Khon Kaen province, Thailand. Before use, it was washed with tap water to remove contaminants and then dried in a hot air oven (Binder, FED 53, Germany) at 80 °C for 24 h. It was then ground, sieved to a particle size of 125 μm, and kept in a desiccator; this material is called sugarcane bagasse powder (SBP) 8 .
Chemicals
All chemicals used in this study were of analytical grade (AR) and used without purification: titanium dioxide (TiO 2 ) (Loba, India), magnesium oxide (MgO) (RCI Labscan, Thailand), aluminum oxide (Al 2 O 3 ) (Kemaus, New Zealand), zinc oxide (ZnO) (QRëC, New Zealand), sodium alginate (NaC 6 H 7 O 6 ) (Merck, Germany), calcium chloride dihydrate (CaCl 2 ·2H 2 O) (RCI Labscan, Thailand), direct red 28 (DR28) dye (C 32 H 22 N 6 Na 2 O 6 S 2 ) (Sigma-Aldrich, Germany), 0.1 M HCl (RCI Labscan, Thailand), and 0.1 M NaOH (RCI Labscan, Thailand). pH adjustments were made with 0.5% nitric acid (HNO 3 ) (Merck, Germany) and 0.5% NaOH (RCI Labscan, Thailand).
Dye solution preparation
Dye solutions were prepared by diluting a stock solution of direct red 28 (DR28) dye with a concentration of 100 mg/L.
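Working solutions can be obtained from the stock by the usual C 1 V 1 = C 2 V 2 dilution; the target volume below is an example value, not a prescribed one.

```python
def stock_volume_ml(c_stock: float, c_target: float, v_target_ml: float) -> float:
    """Volume of stock (mL) needed for v_target_ml at c_target (C1V1 = C2V2)."""
    return c_target * v_target_ml / c_stock

# Volumes of the 100 mg/L DR28 stock for 100 mL working solutions
for c in (30, 50, 70, 90):                                   # mg/L
    print(c, "mg/L:", stock_volume_ml(100.0, c, 100.0), "mL of stock")
```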
Material synthesis
The material synthesis methods follow the studies of Ngamsurach et al. 8 , Praipipat et al. 9 , and Praipipat et al. 18 , and the flow diagrams are illustrated in Fig. 1 . The details are described below:
The synthesis of sugarcane bagasse beads (SBB)
Firstly, 10 g of SBP were added to a 1000 mL beaker containing 400 mL of 2% NaC 6 H 7 O 6 , and the mixture was heated on a hot plate (Ingenieurbüro CAT, M. Zipperer GmbH, M 6, Germany) at 60 °C with a stable stirring speed of 200 rpm until homogeneously mixed. Next, the mixture was loaded into a syringe with a needle (1.2 mm × 25 mm) and added dropwise into 250 mL of 0.1 M CaCl 2 ·2H 2 O, where the beads were soaked for 24 h to set. Then, the beads were filtered, rinsed with DI water, and air-dried at room temperature for 12 h. Finally, they were kept in a desiccator before use and are called sugarcane bagasse beads (SBB).
The synthesis of sugarcane bagasse beads modified with titanium dioxide (SBBT) or magnesium oxide (SBBM) or aluminum oxide (SBBA) or zinc oxide (SBBZ)
Firstly, 10 g of SBP were added to a 250 mL Erlenmeyer flask containing 160 mL of 5% (w/v) TiO 2 , MgO, Al 2 O 3 , or ZnO suspension prepared with deionized water, and the contents were homogeneously mixed in an orbital shaker (GFL, 3020, Germany) at 200 rpm for 3 h. Next, the solids were filtered, air-dried at room temperature for 12 h, and kept in a desiccator; these materials are called sugarcane bagasse powder mixed with TiO 2 , MgO, Al 2 O 3 , or ZnO (SBPT, SBPM, SBPA, or SBPZ). Then, SBPT, SBPM, SBPA, or SBPZ were added to a 1000 mL beaker containing 400 mL of 2% NaC 6 H 7 O 6 , and the mixture was heated on a hot plate at 60 °C with a stable stirring speed of 200 rpm until homogeneously mixed. Next, the mixture was loaded into a syringe with a needle (1.2 mm × 25 mm) and added dropwise into 250 mL of 0.1 M CaCl 2 ·2H 2 O, where the beads were soaked for 24 h to set. Then, the beads were filtered, rinsed with DI water, and air-dried at room temperature for 12 h. Finally, they were kept in a desiccator before use and are called sugarcane bagasse beads modified with titanium dioxide (SBBT), magnesium oxide (SBBM), aluminum oxide (SBBA), or zinc oxide (SBBZ).
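The reagent masses implied by the recipe above can be checked with a short calculation (1% w/v = 1 g per 100 mL; the molar mass of CaCl 2 ·2H 2 O, about 147.01 g/mol, is taken from standard tables rather than the text):

```python
def wv_mass_g(percent_wv: float, volume_ml: float) -> float:
    """Mass (g) of solute for a w/v percentage in a given volume."""
    return percent_wv / 100.0 * volume_ml

print(wv_mass_g(5.0, 160.0))           # metal oxide for 5% (w/v) in 160 mL
print(wv_mass_g(2.0, 400.0))           # sodium alginate for 2% in 400 mL
print(round(0.1 * 0.250 * 147.01, 2))  # g CaCl2*2H2O for 250 mL of 0.1 M (molar mass assumed)
```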
Material characterizations
The material characterizations on the specific surface area, pore volumes, pore sizes, surface structures, chemical elements, and chemical functional groups of SBB, SBBT, SBBM, SBBA, and SBBZ were investigated by Brunauer–Emmett–Teller (BET), Field emission scanning electron microscopy and focus ion beam (FESEM-FIB) with Energy dispersive X-ray spectrometer (EDX) (FEI, Helios NanoLab G3 CX, USA), and Fourier transform infrared spectroscopy (FT-IR) (Bruker, TENSOR27, Hong Kong).
The point of zero charge (pH pzc )
The points of zero charge of SBB, SBBT, SBBM, SBBA, and SBBZ for DR28 dye adsorption were determined by the pH drift method described in the studies of Praipipat et al. 18 , 19 : 0.1 M NaCl solutions with pH values from 2 to 12 were prepared using 0.1 M HCl and 0.1 M NaOH. Then, 2 g/L of SBB, SBBT, SBBM, SBBA, or SBBZ was added to 50 mL of 0.1 M NaCl solution contained in a 250 mL Erlenmeyer flask, and the flask was shaken at 150 rpm for 24 h at room temperature on an orbital shaker. Finally, the final pH of the sample was measured with a pH meter (Mettler Toledo, SevenGo with InLab 413/IP67, Switzerland), and ∆pH (pH final –pH initial ) was calculated to determine the point of zero charge (pH pzc ).
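Numerically, the pH drift procedure above finds the pH pzc as the point where ∆pH = pH final – pH initial crosses zero. A minimal pure-Python sketch; the drift values below are illustrative, not measured data from this study:

```python
def point_of_zero_charge(initial_ph, final_ph):
    """Estimate pH_pzc as the zero crossing of dpH = pH_final - pH_initial.

    Linearly interpolates between the two initial-pH points where dpH
    changes sign, as in the pH drift method.
    """
    dph = [f - i for i, f in zip(initial_ph, final_ph)]
    for k in range(len(dph) - 1):
        if dph[k] == 0:
            return initial_ph[k]
        if dph[k] * dph[k + 1] < 0:  # sign change brackets the crossing
            x0, x1 = initial_ph[k], initial_ph[k + 1]
            y0, y1 = dph[k], dph[k + 1]
            return x0 - y0 * (x1 - x0) / (y1 - y0)
    raise ValueError("no zero crossing of dpH in the measured range")

# Illustrative drift data (initial pH 2-12, hypothetical final pH values)
initial = [2, 4, 6, 8, 10, 12]
final = [3.0, 5.2, 6.8, 7.4, 9.0, 10.5]
print(round(point_of_zero_charge(initial, final), 2))  # → 7.14
```

With real drift curves, the crossing is usually read off the plot of ∆pH versus initial pH; the interpolation above just makes that reading reproducible.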
Batch experiments
The affecting factors of dose (5–30 g/L), contact time (3–18 h), temperature (20–50 °C), pH (3–11), and concentration (30–90 mg/L) on DR28 dye removal efficiencies of SBB, SBBT, SBBM, SBBA, and SBBZ were investigated through a series of batch experiments referred from the previous study of Praipipat et al. 18 , with the control condition of an initial DR28 dye concentration of 50 mg/L, a sample volume of 100 mL, and a shaking speed of 150 rpm using an incubator shaker (New Brunswick, Innova 42, USA) 8 , 9 , 20 . The optimum conditions were chosen as the lowest dose, contact time, temperature, pH, or concentration that obtained the highest DR28 dye removal efficiency 9 . A UV–VIS spectrophotometer (UH5300, Hitachi, Japan) at a wavelength of 497 nm was used for analyzing dye concentrations, and the experiments were performed in triplicate to verify the results and report the average value. Dye removal efficiency in percentage and dye adsorption capacity were calculated following Eqs. ( 1 )–( 2 ):

Removal (%) = ((C 0 – C e )/C 0 ) × 100 (1)

q e = (C 0 – C e )V/m (2)

where C e is the dye concentration at equilibrium (mg/L), C 0 is the initial dye concentration (mg/L), q e is the capacity of dye adsorption on adsorbent material at equilibrium (mg/g), V is the sample volume (L), and m is the amount of adsorbent material (g).
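As a quick numerical check of Eqs. ( 1 )–( 2 ), both quantities can be computed directly. A minimal sketch; the concentrations, volume, and dose below are illustrative:

```python
def removal_efficiency(c0, ce):
    """Dye removal efficiency (%) from Eq. (1): (C0 - Ce)/C0 * 100."""
    return (c0 - ce) / c0 * 100.0

def adsorption_capacity(c0, ce, volume_l, mass_g):
    """Equilibrium adsorption capacity q_e (mg/g) from Eq. (2): (C0 - Ce)V/m."""
    return (c0 - ce) * volume_l / mass_g

# Example: 50 mg/L initial dye, 10 mg/L left at equilibrium,
# 100 mL sample with a 25 g/L dose (2.5 g in 0.1 L)
print(removal_efficiency(50, 10))             # → 80.0 (%)
print(adsorption_capacity(50, 10, 0.1, 2.5))  # → 1.6 (mg/g)
```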
Adsorption isotherms
The adsorption patterns of SBB, SBBT, SBBM, SBBA, and SBBZ were determined by using nonlinear Langmuir, Freundlich, Temkin, and Dubinin–Radushkevich models. The Langmuir model represents monolayer adsorption, whereas the Freundlich model represents multilayer adsorption 21 , 22 . The Temkin model refers to the heat of adsorption decreasing with increasing adsorbent coverage, and the Dubinin–Radushkevich model is used to determine whether the adsorption mechanism is physisorption or chemisorption 23 , 24 . Their adsorption isotherms were calculated by Eqs. ( 3 )–( 6 ) 20 – 24 :
Langmuir isotherm:

q e = q m K L C e /(1 + K L C e ) (3)

Freundlich isotherm:

q e = K F C e ^(1/n) (4)

Temkin isotherm:

q e = (RT/b T ) ln(A T C e ) (5)

Dubinin–Radushkevich isotherm:

q e = q m exp(–K DR ε^2), with ε = RT ln(1 + 1/C e ) (6)

where q e is the capacity of dye adsorption on adsorbent material at equilibrium (mg/g), q m is the maximum capacity of dye adsorption on adsorbent material (mg/g), C e is the equilibrium dye concentration (mg/L), K L is the Langmuir adsorption constant (L/mg), K F is the Freundlich constant of adsorption capacity (mg/g)(L/mg) 1/n , and n is the constant depicting the adsorption intensity. R is the universal gas constant (8.314 J/mol K), T is the absolute temperature (K), b T is the constant related to the heat of adsorption (J/mol), A T is the equilibrium binding constant corresponding to maximum binding energy (L/mg), K DR is the activity coefficient related to mean adsorption energy (mol 2 /J 2 ), and ε is the Polanyi potential (J/mol). Their graphs are plotted by q e versus C e .
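The Langmuir model of Eq. ( 3 ) also has the classical linearized form C e /q e = C e /q m + 1/(K L q m ), which needs only ordinary least squares. The study itself fitted the nonlinear forms; the linearized version is shown here only because it runs without an external fitting library. A minimal sketch with synthetic Langmuir data (the parameter values are illustrative, not fitted results from this study):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

def fit_langmuir(ce, qe):
    """Recover (q_m, K_L) from the linearized plot of Ce/qe versus Ce."""
    slope, intercept = linear_fit(ce, [c / q for c, q in zip(ce, qe)])
    qm = 1.0 / slope           # slope = 1/q_m
    kl = slope / intercept     # intercept = 1/(K_L * q_m)
    return qm, kl

# Synthetic data generated from q_e = q_m K_L C_e / (1 + K_L C_e)
qm_true, kl_true = 5.0, 0.2
ce = [5.0, 10.0, 20.0, 40.0, 80.0]
qe = [qm_true * kl_true * c / (1 + kl_true * c) for c in ce]
qm_fit, kl_fit = fit_langmuir(ce, qe)
print(round(qm_fit, 3), round(kl_fit, 3))  # recovers ≈ 5.0 and 0.2
```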
For the adsorption isotherm experiments, 25 g/L and 18 h for SBB, 15 g/L and 18 h for SBBT, 20 g/L and 6 h for SBBM, 15 g/L and 12 h for SBBA, or 25 g/L and 12 h for SBBZ were added to 250 mL Erlenmeyer flasks with DR28 dye concentrations varied from 30 to 90 mg/L. The control condition for SBB, SBBT, SBBM, SBBA, or SBBZ was a sample volume of 100 mL, a shaking speed of 150 rpm, pH 3, and a temperature of 35 °C.
Adsorption kinetics
The adsorption rate and mechanism of SBB, SBBT, SBBM, SBBA, and SBBZ were determined by using nonlinear pseudo-first-order kinetic, pseudo-second-order kinetic, Elovich, and intra-particle diffusion models. The pseudo-first-order and pseudo-second-order kinetic models represent the physisorption and chemisorption processes, respectively 25 , 26 . The Elovich model describes a chemical adsorption process on a heterogeneous surface, and the intra-particle diffusion model refers to the rate-limiting step in the adsorption process 27 , 28 . Their adsorption kinetics were calculated by Eqs. ( 7 )–( 10 ) 25 – 28 :
Pseudo-first-order kinetic model:

q t = q e (1 – exp(–k 1 t)) (7)

Pseudo-second-order kinetic model:

q t = k 2 q e ^2 t/(1 + k 2 q e t) (8)

Elovich model:

q t = (1/β) ln(1 + αβt) (9)

Intra-particle diffusion model:

q t = k i t^0.5 + C i (10)

where q e is the capacity of dye adsorption on adsorbent material at equilibrium (mg/g), q t is the capacity of dye adsorption on adsorbent material at time t (mg/g), k 1 is a pseudo-first-order rate constant (min −1 ), and k 2 is a pseudo-second-order rate constant (g/mg min). α is the initial adsorption rate (mg/g min) and β is the extent of surface coverage (g/mg). k i is the intra-particle diffusion rate constant (mg/g min 0.5 ) and C i is the constant that gives an idea about the thickness of the boundary layer (mg/g) 19 , 29 . Their graphs are plotted by q t versus t .
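Similarly, the pseudo-second-order model of Eq. ( 8 ) has the linearized form t/q t = 1/(k 2 q e ^2) + t/q e . A minimal sketch with synthetic kinetic data (parameter values are illustrative; the study itself fitted the nonlinear forms):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

def fit_pseudo_second_order(t, qt):
    """Recover (q_e, k_2) from the linear plot of t/q_t versus t."""
    slope, intercept = linear_fit(t, [ti / qi for ti, qi in zip(t, qt)])
    qe = 1.0 / slope                 # slope = 1/q_e
    k2 = slope ** 2 / intercept      # intercept = 1/(k_2 * q_e^2)
    return qe, k2

# Synthetic data from q_t = k_2 q_e^2 t / (1 + k_2 q_e t)
qe_true, k2_true = 2.0, 0.05
t = [10.0, 30.0, 60.0, 120.0, 240.0]
qt = [k2_true * qe_true ** 2 * ti / (1 + k2_true * qe_true * ti) for ti in t]
qe_fit, k2_fit = fit_pseudo_second_order(t, qt)
print(round(qe_fit, 3), round(k2_fit, 4))  # recovers ≈ 2.0 and 0.05
```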
For the kinetic experiments, 25 g/L of SBB or 15 g/L of SBBT or 20 g/L of SBBM or 15 g/L of SBBA, or 25 g/L of SBBZ were added to a 1000 mL beaker. The control condition of SBB or SBBT or SBBM or SBBA, or SBBZ was a sample volume of 1000 mL, DR28 dye concentrations of 50 mg/L, a shaking speed of 150 rpm, pH 3, and a contact time of 24 h 18 .
Thermodynamic study
The temperature effect on the DR28 dye adsorption capacities of SBB, SBBT, SBBM, SBBA, and SBBZ was investigated through thermodynamic studies in a range of 293.15–323.15 K, and the results were explained by three thermodynamic parameters: the Gibbs free energy change (∆ G °), standard enthalpy change (∆ H °), and standard entropy change (∆ S °). Equations ( 11 )–( 13 ) were used to calculate these parameters 18 :

K c = q e /C e (11)

ln K c = ∆ S °/R – ∆ H °/(RT) (12)

∆ G ° = ∆ H ° – T∆ S ° (13)

where R is the universal gas constant (8.314 J/mol K), T is the absolute temperature (K), and K c is the equilibrium constant (L/mg). The values of ∆ H ° and ∆ S ° were calculated from the slope and intercept of the linear graph between ln K c ( K c = q e / C e ) and 1/ T , and ∆ G ° was calculated from Eq. ( 13 ).
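The procedure in Eqs. ( 11 )–( 13 ) amounts to a van't Hoff linear regression of ln K c against 1/T. A minimal sketch with synthetic data; the ∆H° and ∆S° values below are assumed for illustration, not results of this study:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

def vant_hoff(temps_k, kc):
    """dH (J/mol) and dS (J/mol K) from ln Kc = dS/R - dH/(R*T)."""
    slope, intercept = linear_fit([1.0 / t for t in temps_k],
                                  [math.log(k) for k in kc])
    dh = -slope * R       # slope of ln Kc vs 1/T is -dH/R
    ds = intercept * R    # intercept is dS/R
    return dh, ds

# Synthetic Kc values generated from assumed dH = 30 kJ/mol, dS = 120 J/mol K
dh_true, ds_true = 30000.0, 120.0
temps = [293.15, 303.15, 313.15, 323.15]
kc = [math.exp(ds_true / R - dh_true / (R * t)) for t in temps]
dh, ds = vant_hoff(temps, kc)
dg_298 = dh - 298.15 * ds  # Eq. (13) at 298.15 K
# recovers dH ≈ 30000 (endothermic), dS ≈ 120, dG(298 K) ≈ -5778 (spontaneous)
print(round(dh), round(ds, 2), round(dg_298))
```

The signs match the pattern reported below for all five materials: positive ∆H° (endothermic), positive ∆S° (increased randomness), negative ∆G° (spontaneous).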
For the thermodynamic experiments, 25 g/L and 18 h for SBB, 15 g/L and 18 h for SBBT, 20 g/L and 6 h for SBBM, 15 g/L and 12 h for SBBA, or 25 g/L and 12 h for SBBZ were applied at temperatures of 293.15–323.15 K with the control condition of a DR28 dye concentration of 50 mg/L, a sample volume of 100 mL, pH 3, and a shaking speed of 150 rpm 20 .

Result and discussion
BET
The specific surface area, pore volumes, and pore sizes of SBB, SBBT, SBBM, SBBA, and SBBZ are illustrated in Table 2 . Their specific surface areas and pore volumes could be arranged from high to low as SBBM > SBBT > SBBA > SBBZ > SBB, and SBBM demonstrated the highest surface area and pore volume among the materials. Since magnesium oxide (MgO), titanium dioxide (TiO 2 ), aluminum oxide (Al 2 O 3 ), and zinc oxide (ZnO) have high specific surface areas by themselves, the materials prepared with those metal oxides had higher specific surface areas than the raw material. Moreover, previous studies reported that the specific surface areas of MgO, TiO 2 , Al 2 O 3 , and ZnO were 60, 50, 40, and 30 m 2 /g, respectively, which could be arranged in order from high to low as MgO > TiO 2 > Al 2 O 3 > ZnO 30 , 31 . This supports why SBBM had a higher surface area than the other materials. Therefore, the metal oxides TiO 2 , MgO, Al 2 O 3 , and ZnO increased the specific surface area and pore volumes of the materials through their formations with sugarcane bagasse, providing more active sites for capturing DR28 dye, as similarly reported by previous studies using the same metal oxides 9 , 18 , 20 . Moreover, other metal oxides such as zinc oxide, iron(III) oxide-hydroxide, and goethite used in previous studies also support this study in that raw materials with added metal oxides showed increased surface area and pore volume 18 , 32 – 36 . Since their pore sizes were more than 2 nm, they were classified as mesoporous materials by the International Union of Pure and Applied Chemistry (IUPAC) classification 37 .
FESEM-FIB and EDX
For FESEM-FIB analysis, the surface morphologies of SBB, SBBT, SBBM, SBBA, and SBBZ at 1,500× magnification (100 μm scale) are demonstrated in Fig. 2 a–e. The surfaces of SBB, SBBM, SBBT, and SBBA were scaly sheet surfaces and structures with an irregular shape, similar to those reported in other studies 8 , 9 , whereas SBBZ had a coarse surface, as found in a previous study 8 .
For EDX analysis, the chemical elements of SBB, SBBT, SBBM, SBBA, and SBBZ are illustrated in Table 3 , and their EDX mapping distributions are also demonstrated in Fig. 2 f–j. Five main chemical elements of oxygen (O), carbon (C), calcium (Ca), chloride (Cl), and sodium (Na) were observed in all materials, whereas titanium (Ti), magnesium (Mg), aluminum (Al), and zinc (Zn) were only detected in SBBT, SBBM, SBBA, and SBBZ, respectively, because of the addition of those metal oxides. In addition, the observations of Na, Ca, and Cl in all materials might come from the sodium alginate and calcium chloride used in bead formation.
FT-IR
The chemical functional groups of SBB, SBBT, SBBM, SBBA, and SBBZ are illustrated in Fig. 3 a–e, in which five main chemical functional groups of O–H, C–H, C=O, C=C, and C–O–C were observed, as found in previous studies 8 , 9 , 29 . For O–H, it was the stretching of the water molecule and the hydroxide groups of alcohol, phenol, and carboxylic acids 9 , found in a range of 3310–3700 cm −1 . For C–H, it referred to the bending of alkane (CH 2 ), alkene (CH 3 ), and the aliphatic and aromatic groups of cellulose 38 , observed in a range of 2896–2960 cm −1 . In addition, C–H also represented the stretching of CH 3 in a range of 1330–1430 cm −1 and the bending of lignin and the aromatic ring 39 in a range of 720–750 cm −1 . For C=O, it was the stretching of the carbonyl group, aldehyde, and ketone 39 , illustrated in a range of 1720–1740 cm −1 . For C=C, it was the stretching of the aromatic ring in the lignin structure and the stretching of hemicellulose and cellulose 29 , found in ranges of 1500–1610 cm −1 and 810–900 cm −1 , respectively. For C–O–C, it referred to the stretching of hemicellulose, cellulose, and sodium alginate 8 in a range of 1020–1090 cm −1 . Moreover, the functional groups of Ti–O–Ti, Mg–O, Al–O, and Zn–O were observed in SBBT, SBBM, SBBA, and SBBZ from the addition of titanium dioxide, magnesium oxide, aluminum oxide, and zinc oxide 18 , found at 663.49, 655.77, 654.35, and 678.92 cm −1 , respectively.
The point of zero charge (pH pzc )
The surface charges of SBB, SBBT, SBBM, SBBA, and SBBZ were determined by the point of zero charge (pH pzc ) to predict which pH is preferred for DR28 dye adsorption of each material. Figure 4 illustrates the pH pzc values of SBB, SBBT, SBBM, SBBA, and SBBZ, which were 6.57, 7.31, 10.11, 7.25, and 7.77, respectively, and SBBM showed the highest pH pzc among the materials, as found in a previous study 18 . Since the material surface is positively charged at a solution pH (pH solution ) less than pH pzc , the anionic dye should be adsorbed under this condition because the surface can attract the DR28 dye molecule. On the other hand, DR28 dye adsorption is not favored at a pH solution higher than pH pzc because the negatively charged material surface repels the DR28 dye molecule. Therefore, DR28 dye adsorption on each material should take place at a solution pH less than its pH pzc (pH solution < pH pzc ) 18 , 40 .
Batch experiments
The effect of dosage
The effect of dosage from 5 to 30 g/L was designed to investigate how many grams of each material are needed for adsorbing DR28 dye at a concentration of 50 mg/L, a sample volume of 100 mL, a contact time of 12 h, pH 7, a temperature of 30 °C, and a shaking speed of 150 rpm 9 to obtain the highest DR28 dye removal efficiency, and the results are shown in Fig. 5 a. The DR28 dye removal efficiencies of SBB, SBBT, SBBM, SBBA, and SBBZ increased with increasing material dosage from 5 to 30 g/L because of the increase in active sites for adsorbing DR28 dye, as similarly reported by other studies 41 , 42 . Furthermore, the highest DR28 dye removal efficiencies were found at 25 g/L (81.90%), 15 g/L (85.23%), 20 g/L (92.67%), 15 g/L (87.30%), and 25 g/L (83.73%) for SBB, SBBT, SBBM, SBBA, and SBBZ, respectively. Therefore, these were used as the optimum dosages for the effect of contact time.
The effect of contact time
The effect of contact time from 3 to 18 h was used to determine how much contact time each material needs for adsorbing DR28 dye at a concentration of 50 mg/L, a sample volume of 100 mL, pH 7, a temperature of 30 °C, a shaking speed of 150 rpm 9 , and the optimum dosage to achieve the highest DR28 dye removal efficiency, and the results are shown in Fig. 5 b. The DR28 dye removal efficiencies of SBB, SBBT, SBBM, SBBA, and SBBZ increased with increasing contact time from 3 to 18 h until adsorption became saturated, and the contact time at which removal became constant was taken as the optimum contact time 18 . The highest DR28 dye removal efficiencies were found at 18 h (79.41%), 18 h (84.59%), 6 h (93.16%), 12 h (86.71%), and 12 h (82.94%) for SBB, SBBT, SBBM, SBBA, and SBBZ, respectively. Therefore, these were used as the optimum contact times for the effect of temperature.
The effect of temperature
The effect of temperature from 20 to 50 °C was examined to determine which temperature is suitable for each material to adsorb DR28 dye at a concentration of 50 mg/L, a sample volume of 100 mL, pH 7, a shaking speed of 150 rpm 9 , and the optimum dosage and contact time to get the highest DR28 dye removal efficiency, and the results are shown in Fig. 5 c. The DR28 dye removal efficiencies of SBB, SBBT, SBBM, SBBA, and SBBZ increased with increasing temperature from 20 to 35 °C and then decreased slightly. The highest DR28 dye removal efficiencies were found at 35 °C for all materials, with 80.43%, 85.02%, 94.33%, 87.33%, and 83.75% for SBB, SBBT, SBBM, SBBA, and SBBZ, respectively. Therefore, a temperature of 35 °C was the optimum temperature for the effect of pH.
The effect of pH
The effect of pH from 3 to 11 was used to examine the influence of pH on the DR28 dye removal efficiencies of SBB, SBBT, SBBM, SBBA, and SBBZ and to find the optimum pH for adsorbing DR28 dye at a concentration of 50 mg/L, a sample volume of 100 mL, a shaking speed of 150 rpm 9 , and the optimum dosage, contact time, and temperature to get the highest DR28 dye removal efficiency, and the results are shown in Fig. 5 d. For pK a and the pH of solution (pH solution ), if the pH solution is higher than pK a (pH solution > pK a ), the dye molecule is in an anionic form. On the opposite, if the pH solution is less than pK a (pH solution < pK a ), the dye molecule is in a cationic form. Since the pK a of DR28 dye is 4.1 43 , the DR28 dye molecule should adsorb at pH solution > pK a . From the results of the point of zero charge (pH pzc ), DR28 dye adsorption should occur at pH solution < pH pzc . As a result, high DR28 dye adsorption of each material should be observed at pK a < pH solution < pH pzc . In Fig. 5 d, DR28 dye was highly adsorbed at pH 3–5, and the highest DR28 dye removal efficiency was found at pH 3, with 79.56%, 84.35%, 93.83%, 86.87%, and 82.58% for SBB, SBBT, SBBM, SBBA, and SBBZ, respectively, which might be supported by the pK a of the carboxyl group (–COOH) in the materials, which is 3–5 44 . In addition, these results also agreed with prior studies that found the highest anionic dye removal efficiencies at pH 3 8 , 9 , 18 , 40 . Therefore, pH 3 was the optimum pH for the effect of concentration.
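The charge argument above (dye anionic when pH solution > pK a , surface positive when pH solution < pH pzc ) reduces to a simple window check. A minimal sketch, using the DR28 pK a of 4.1 and the pH pzc of SBB (6.57) from the point-of-zero-charge results; note that the observed optimum of pH 3 lies slightly below this idealized window, which the text attributes to the carboxyl-group pK a of the materials:

```python
def favorable_ph_window(pka_dye, ph_pzc):
    """Idealized pH window where the anionic dye and a positively charged
    surface coexist: pKa < pH_solution < pH_pzc."""
    return pka_dye, ph_pzc

def is_favorable(ph, pka_dye, ph_pzc):
    lo, hi = favorable_ph_window(pka_dye, ph_pzc)
    return lo < ph < hi

# DR28 pKa = 4.1; pH_pzc of SBB = 6.57
print(is_favorable(5.0, 4.1, 6.57))  # → True  (inside the window)
print(is_favorable(9.0, 4.1, 6.57))  # → False (surface negatively charged)
```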
The effect of concentration
The effect of concentration from 30 to 90 mg/L was observed to determine how much DR28 dye each material could adsorb at a sample volume of 100 mL, a shaking speed of 150 rpm 9 , and the optimum dosage, contact time, temperature, and pH to get the highest DR28 dye removal efficiency, and the results are shown in Fig. 5 e. The DR28 dye removal efficiencies of SBB, SBBT, SBBM, SBBA, and SBBZ decreased with increasing concentration because of the decrease of available active sites for adsorbing DR28 dye, similar to other studies 18 . Their DR28 dye removal efficiencies from 30 to 90 mg/L were 67.27–84.39%, 75.73–89.39%, 83.84–96.02%, 78.09–90.94%, and 70.47–87.50% for SBB, SBBT, SBBM, SBBA, and SBBZ, respectively, and their DR28 dye removal efficiencies at 50 mg/L were 81.51%, 85.44%, 94.27%, 88.31%, and 83.51%, respectively.
Finally, the optimum conditions in dosage, contact time, temperature, pH, and concentration were 25 g/L, 18 h, 35 °C, pH 3, and 50 mg/L for SBB; 15 g/L, 18 h, 35 °C, pH 3, and 50 mg/L for SBBT; 20 g/L, 6 h, 35 °C, pH 3, and 50 mg/L for SBBM; 15 g/L, 12 h, 35 °C, pH 3, and 50 mg/L for SBBA; and 25 g/L, 12 h, 35 °C, pH 3, and 50 mg/L for SBBZ. The DR28 dye removal efficiencies could be arranged in order from high to low as SBBM > SBBA > SBBT > SBBZ > SBB, and SBBM had the highest DR28 dye removal efficiency while requiring a lower material dosage and shorter contact time than the other materials, as similarly found by a previous study on sugarcane bagasse fly ash beads modified with the same types of metal oxides for DR28 dye adsorption in aqueous solution 18 . Moreover, these results also corresponded to the BET analysis in that SBBM had a higher surface area with a smaller pore size than the other materials, so it could adsorb DR28 dye more than the others. Therefore, the addition of the metal oxides magnesium oxide (MgO), titanium dioxide (TiO 2 ), aluminum oxide (Al 2 O 3 ), and zinc oxide (ZnO) increased the material efficiencies for adsorbing DR28 dye, and SBBM was a high-potential material for further use in industrial wastewater treatment.
For comparison with other anionic dye removals, previous studies have used sugarcane bagasse or sugarcane bagasse fly ash beads with or without metal modifications of iron(III) oxide-hydroxide, ZnO, TiO 2 , MgO, and Al 2 O 3 for removing reactive blue 4 (RB4) and DR28 dyes 8 , 9 , 18 , and the results demonstrated that sugarcane bagasse and sugarcane bagasse fly ash beads mixed with MgO had higher RB4 and DR28 dye removals than the other materials. These results corresponded to this study, in which SBBM showed the highest DR28 dye removal, so it could be confirmed that sugarcane bagasse beads with or without metal modifications, especially MgO, could remove various anionic dyes such as RB4 and DR28.
Adsorption isotherms
The adsorption patterns of SBB, SBBT, SBBM, SBBA, and SBBZ are described by various adsorption isotherms of Langmuir, Freundlich, Temkin, and Dubinin–Radushkevich models. Their graphs are plotted by q e versus C e . The results are shown in Fig. 6 a–e, and Table 4 displayed their equilibrium isotherm parameters.
The R 2 value is normally used for determining which adsorption isotherm better explains the adsorption pattern, and the model with the higher R 2 is chosen. As a result, SBB corresponded to the Langmuir model, relating to physical adsorption, with a high R 2 of 0.997, whereas SBBT, SBBM, SBBA, and SBBZ corresponded to the Freundlich model, relating to chemisorption with heterogeneous adsorption, with high R 2 values of 0.998, 0.992, 0.997, and 0.994, respectively, as found in a previous study 18 .
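The model selection above relies on the coefficient of determination; its computation is a one-liner worth making explicit. A minimal sketch with illustrative observed/predicted values (not data from this study):

```python
def r_squared(observed, predicted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [1.0, 2.0, 3.0, 4.0]       # illustrative q_e measurements
pred = [1.1, 1.9, 3.2, 3.9]      # illustrative model predictions
print(round(r_squared(obs, pred), 3))  # → 0.986
```

The model (Langmuir, Freundlich, etc.) whose predictions give the R 2 closest to 1 is the one selected, as done in Table 4 .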
Finally, the comparison of the maximum dye adsorption capacity ( q m ) of this study with other agricultural wastes for DR28 dye removal is demonstrated in Table 5 . The q m values of SBB, SBBT, SBBM, SBBA, and SBBZ were higher than those of cabbage (2.31 mg/g) and rice husk (1.28–2.04 mg/g) 15 , 45 , and the q m value of SBBM was higher than those of the prior studies in Table 5 except the studies of Rehman et al. 46 , Ibrahim and Sani 47 , and Masoudian et al. 48 .
Adsorption kinetics
The adsorption rates and mechanisms of SBB, SBBT, SBBM, SBBA, and SBBZ are determined by several adsorption kinetics of pseudo-first-order kinetic, pseudo-second-order kinetic, Elovich, and intra-particle diffusion models. Their graphs are plotted by q t versus t . The results are shown in Fig. 7 a–e, and Table 6 reported their equilibrium kinetic parameters.
Similar to the adsorption isotherms, the R 2 value is normally used for determining which adsorption kinetic model better describes the adsorption rate and mechanism, and the model with the higher R 2 is preferred. Since the R 2 values of SBB, SBBT, SBBM, SBBA, and SBBZ in the pseudo-second-order kinetic model demonstrated the highest values of 0.997, 0.997, 0.999, 0.999, and 0.994, respectively, their adsorption rates and mechanisms were well described by chemisorption with a heterogeneous process, in agreement with a previous study 18 . In addition, the kinetic parameter q e is used for comparing their DR28 dye adsorption capacities. The q e of SBBM was higher than those of the other materials, so it could adsorb DR28 dye more than the other materials, in agreement with the batch experiment results. Furthermore, the equilibrium DR28 dye adsorption capacities of SBB, SBBT, SBBM, SBBA, and SBBZ demonstrated in Fig. 7 f reached equilibrium within 60 min, indicating their fast kinetic reaction rates.
Thermodynamic study
The results of the thermodynamic studies in a range of 293.15–323.15 K of SBB, SBBT, SBBM, SBBA, and SBBZ on DR28 dye removals are demonstrated in Table 7 and Fig. 8 a–e. Their ∆ G ° had negative values at all temperatures, which meant the adsorption was a favorable process of a spontaneous nature. For ∆ H °, all materials had positive values, which meant their DR28 dye adsorption processes were endothermic 18 , and their ∆ S ° had positive values, which meant the randomness during the adsorption process increased 51 . Therefore, increasing temperature was favorable for DR28 dye adsorption onto all materials.
BET
The specific surface area, pore volumes, and pore sizes of SBB, SBBT, SBBM, SBBA, and SBBZ are illustrated in Table 2 . Their specific surface area and pore volume could be arranged from high to low of SBBM > SBBT > SBBA > SBBZ > SBB, and SBBM demonstrated the highest surface area and pore volume among other materials. Since magnesium oxide (MgO), titanium dioxide (TiO 2 ), aluminum oxide (Al 2 O 3 ), and zinc oxide (ZnO) have a high specific surface area by themselves, the specific surface area of prepared materials by those metal oxides have higher specific surface area than raw material. Moreover, the previous studies reported the specific surface area of MgO, TiO 2 , Al 2 O 3 , and ZnO were 60, 50, 40, and 30 m 2 /g, and they could be arranged in order from high to low of MgO > TiO 2 > Al 2 O 3 > ZnO 30 , 31 . As a result, it could support why SBBM had a higher surface area than other materials. Therefore, metal oxides of TiO 2 , MgO, Al 2 O 3 , and ZnO increased the specific area and pore volumes of materials from the formations of those metal oxides with sugarcane bagasse supported more active sites for capturing DR28 dye adsorptions similar reported by previous studies used the same metal oxides 9 , 18 , 20 . Moreover, other metal oxides of zinc oxide, iron(III) oxide-hydroxide, and goethite have also been used in previous studies supported this study that the raw materials with adding metal oxides increased the surface area and pore volume 18 , 32 – 36 . Since their pore sizes were more than 2 nm, they were classified as mesoporous materials by the International Union of Pure and Applied Chemistry (IUPAC) classification 37 .
FESEM-FIB and EDX
For FESEM-FIB analysis, the surface morphologies at 1,500X magnification with 100 μm of SBB, SBBT, SBBM, SBBA, and SBBZ are demonstrated in Fig. 2 a–e. The surfaces of SBB, SBBM, SBBT, and SBBA were scaly sheet surfaces and structures with an irregular shape similar to other studies reported 8 , 9 , whereas SBBZ had a coarse surface similar found in a previous study 8 .
For EDX analysis, the chemical elements of SBB, SBBT, SBBM, SBBA, and SBBZ are illustrated in Table 3 , and their EDX mapping distributions are also demonstrated in Fig. 2 f–j. Five main chemical elements of oxygen (O), carbon (C), calcium (Ca), chloride (Cl), and sodium (Na) were observed in all materials, whereas titanium (Ti), magnesium (Mg), aluminum (Al), and zinc (Zn) only detected in SBBT, SBBM, SBBA, and SBBZ, respectively because of addition of those metal oxides. In addition, the observations of Na, Ca, and Ca in all materials might be from the chemicals of sodium alginate and calcium chloride used in bead formations.
FT-IR
The chemical functional groups of SBB, SBBT, SBBM, SBBA, and SBBZ are illustrated in Fig. 3 a–e which they observed five main chemical functional groups of O–H, C–H, C=O, C=C, and C–O–C similar found in previous studies 8 , 9 , 29 . For O–H, it was the stretching water molecule, hydroxide groups of alcohol, phenol, and carboxylic acids 9 , and they were found in a range of 3310–3700 cm −1 . For C–H, it referred to the bending of alkane (CH 2 ), alkene (CH 3 ), and aliphatic and aromatic groups of cellulose 38 observed in a range of 2896–2960 cm −1 . In addition, C–H also represented the stretching of CH 3 in a range of 1330–1430 cm −1 , and C–H was the bending of lignin and aromatic ring 39 in a range of 720–750 cm −1 . For C=O, it was the stretching of the carbonyl group, aldehyde, and ketone 39 illustrated in a range of 1720–1740 cm −1 . For C=C, it was the stretching of the aromatic ring in the lignin structure and the stretching of hemicellulose and cellulose 29 which were found in ranges of 1500–1610 cm −1 and 810–900 cm −1 , respectively. For C–O–C, it referred to the stretching of hemicellulose, cellulose, and sodium alginate 8 in a range of 1020–1090 cm −1 . Moreover, the functional groups of Ti–O–Ti, Mg–O, Al–O, and Zn–O were observed in SBBT, SBBM, SBBA, and SBBZ from the addition of titanium dioxide, magnesium oxide, aluminum oxide, and zinc oxide 18 which were found at 663.49, 655.77, 654.35, and 678.92 cm −1 , respectively.
The point of zero charge (pH pzc )
The surface charges of SBB, SBBT, SBBM, SBBA, and SBBZ were determined by the point of zero charge (pH pzc ) to expect which pH is preferred for DR28 dye adsorption of each material. Figure 4 is illustrated the pH pzc of SBB, SBBT, SBBM, SBBA, and SBBZ which were 6.57, 7.31, 10.11, 7.25, and 7.77, and SBBM illustrated the highest pH pzc among other materials similar found in a previous study 18 . Since the anionic dye should be adsorbed at a pH of solution (pH solution ) less than pH pzc because of the positively charged material surface, it can catch up DR28 dye molecule. On the other hand, DR28 dye adsorption is not favored at a pH solution higher than pH pzc because of the negatively charged material surface and the repulsion of the DR28 dye molecule. Therefore, DR28 dye adsorptions of each material should take place at a pH of solution less than its pH pzc (pH solution < pH pzc ) 18 , 40 .
Batch experiments
The effect of dosage
The effect of dosage from 5 to 30 g/L was designed to investigate how many grams of each material are needed for adsorbing DR28 dye at a concentration of 50 mg/L, a sample volume of 100 mL, a contact time of 12 h, a pH 7, a temperature of 30 °C, and a shaking speed of 150 rpm 9 to obtain the highest DR28 dye removal efficiency, and the results are shown in Fig. 5 a. DR28 dye removal efficiencies of SBB, SBBT, SBBM, SBBA, and SBBZ were increased with increasing material dosage from 5 to 30 g/L because of increasing of active sites for adsorbing DR28 dye similarly reported by other studies 41 , 42 . Furthermore, the highest DR28 dye removal efficiencies were found at 25 g/L (81.90%), 15 g/L (85.23%), 20 g/L (92.67%), 15 g/L (87.30%), and 25 g/L (83.73%) for SBB, SBBT, SBBM, SBBA, and SBBZ, respectively. Therefore, they were used as the optimum dosages for the effect of contact time.
The effect of contact time
The effect of contact time from 3 to 18 h was used to determine how much contact time of each material is enough for adsorbing DR28 dye at a concentration of 50 mg/L, a sample volume of 100 mL, a pH 7, a temperature of 30 °C, a shaking speed of 150 rpm 9 , and the optimum contact dosage to achieve the highest DR28 dye removal efficiency, and the results are shown in Fig. 5 b. DR28 dye removal efficiencies of SBB, SBBT, SBBM, SBBA, and SBBZ were increased with increasing contact time from 3 to 18 h until their saturated adsorptions with discovering constant contact time were the optimum contact time 18 . The highest DR28 dye removal efficiencies were found at 18 h (79.41%), 18 h (84.59%), 6 h (93.16%), 12 h (86.71%), and 12 h (82.94%) for SBB, SBBT, SBBM, SBBA, and SBBZ, respectively. Therefore, they were used as the optimum contact time for the effect of temperature.
The effect of temperature
The effect of temperature from 20 to 50 °C was examined how many temperatures of each material are good for adsorbing DR28 dye at a concentration of 50 mg/L, a sample volume of 100 mL, a pH 7, a shaking speed of 150 rpm 9 , and the optimum dosage and contact time to get the highest DR28 dye removal efficiency, and the results are shown in Fig. 5 c. DR28 dye removal efficiencies of SBB, SBBT, SBBM, SBBA, and SBBZ were increased with the increases of temperature from 20 to 35 °C, and then they a little decreased. The highest DR28 dye removal efficiencies were found at 35 °C in all materials with 80.43%, 85.02%, 94.33%, 87.33%, and 83.75% for SBB, SBBT, SBBM, SBBA, and SBBZ, respectively. Therefore, a temperature of 35 °C was the optimum temperature for the effect of pH.
The effect of pH
The effect of pH from 3 to 11 was used to examine the influence of pH on DR28 dye removal efficiencies of SBB, SBBT, SBBM, SBBA, and SBBZ to find the optimum pH for adsorb DR28 dye at a concentration of 50 mg/L, a sample volume of 100 mL, a shaking speed of 150 rpm 9 , and the optimum dosage, contact time, and temperature to get the highest DR28 dye removal efficiency, and the results are shown in Fig. 5 d. For pK a and pH of solution (pH solution ), if the pH solution is higher than pK a (pH solution > pK a ), the dye molecule is in an anionic form. On the opposite, if the pH solution is less than pK a (pH solution < pK a ), the dye molecule is in a cationic form. Since the pK a of DR28 dye is 4.1 43 , the DR28 dye molecule should adsorb at pH solution > pK a. From the results of the point of zero charges (pH pzc ), their DR28 dye adsorptions should occur at pH solution < pH pzc . As a result, the high DR28 dye adsorption of each material should be observed at pK a < pH solution < pH pzc . In Fig. 5 d, their DR28 dye adsorptions were highly adsorbed at pH 3–5, and the highest DR28 dye removal efficiency was found at pH 3 with 79.56%, 84.35%, 93.83%, 86.87%, and 82.58% for SBB, SBBT, SBBM, SBBA, and SBBZ, respectively which might support by the pK a of carboxyl group (–COOH) in materials which is 3–5 44 . In addition, these results also agreed with the prior studies that found the highest anionic dye removal efficiencies at pH 3 8 , 9 , 18 , 40 . Therefore, pH 3 was the optimum pH for the effect of concentration.
The effect of concentration
The effect of concentration from 30 to 90 mg/L observed how many concentrations of each material could adsorb DR28 dye at a sample volume of 100 mL a shaking speed of 150 rpm 9 , and the optimum dosage, contact time, temperature, and pH to get the highest DR28 dye removal efficiency, and the results are shown in Fig. 5 e. DR28 dye removal efficiencies of SBB, SBBT, SBBM, SBBA, and SBBZ were decreased with increasing concentration because the decrease of active sites for adsorbing DR28 dye similar to other studies 18 . Their DR28 dye removal efficiencies from 30 to 90 mg/L were 67.27–84.39%, 75.73–89.39%, 83.84–96.02%, 78.09–90.94%, and 70.47–87.50% for SBB, SBBT, SBBM, SBBA, and SBBZ, respectively, and their DR28 dye removal efficiencies at 50 mg/L were 81.51%, 85.44%, 94.27%, 88.31%, and 83.51% for SBB, SBBT, SBBM, SBBA, and SBBZ, respectively.
Finally, the optimum conditions (dosage, contact time, temperature, pH, and concentration) were 25 g/L, 18 h, 35 °C, pH 3, and 50 mg/L for SBB; 15 g/L, 18 h, 35 °C, pH 3, and 50 mg/L for SBBT; 20 g/L, 6 h, 35 °C, pH 3, and 50 mg/L for SBBM; 15 g/L, 12 h, 35 °C, pH 3, and 50 mg/L for SBBA; and 25 g/L, 12 h, 35 °C, pH 3, and 50 mg/L for SBBZ. The DR28 dye removal efficiencies could be ranked from high to low as SBBM > SBBA > SBBT > SBBZ > SBB, and SBBM achieved the highest DR28 dye removal efficiency while requiring less material dosage and contact time than the other materials, similar to a previous study with sugarcane bagasse fly ash beads modified with the same types of metal oxide for DR28 dye adsorption in aqueous solution 18 . Moreover, these results corresponded to the BET analysis: SBBM had a higher surface area with a smaller pore size than the other materials, so it could adsorb more DR28 dye than the others. Therefore, the addition of the metal oxides magnesium oxide (MgO), titanium dioxide (TiO 2 ), aluminum oxide (Al 2 O 3 ), and zinc oxide (ZnO) increased the material efficiencies for adsorbing DR28 dye, and SBBM was a high-potential material for further use in industrial wastewater treatment.
For comparison with other anionic dye removals, previous studies have used sugarcane bagasse or sugarcane bagasse fly ash beads, with or without metal modifications of iron (III) oxide-hydroxide, ZnO, TiO 2 , MgO, and Al 2 O 3 , for removing reactive blue 4 (RB4) and DR28 dyes 8 , 9 , 18 , and the results demonstrated that sugarcane bagasse and sugarcane bagasse fly ash beads mixed with MgO had higher RB4 and DR28 dye removals than the other materials. These results corresponded to this study, in which SBBM showed the highest DR28 dye removal, confirming that sugarcane bagasse beads with or without metal modifications, especially MgO, can remove various anionic dyes such as RB4 and DR28.
Adsorption isotherms
The adsorption patterns of SBB, SBBT, SBBM, SBBA, and SBBZ are described by various adsorption isotherms: the Langmuir, Freundlich, Temkin, and Dubinin–Radushkevich models. Their graphs are plotted as q e versus C e . The results are shown in Fig. 6 a–e, and Table 4 displays their equilibrium isotherm parameters.
The R 2 value is normally used to determine which adsorption isotherm better explains the adsorption pattern, and the model with the higher R 2 is chosen. As a result, SBB corresponded to the Langmuir model, relating to physical adsorption, with a high R 2 of 0.997, whereas SBBT, SBBM, SBBA, and SBBZ corresponded to the Freundlich model, relating to chemisorption with heterogeneous adsorption, with high R 2 values of 0.998, 0.992, 0.997, and 0.994, respectively, similar to the findings of a previous study 18 .
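The model-selection step described above can be sketched as follows. This is an illustrative implementation with synthetic equilibrium data (not the paper's measurements): both isotherms are fitted in their common linearized forms and compared by R 2 .

```python
import numpy as np

def r2(y, y_fit):
    """Coefficient of determination for a fitted line."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_langmuir(ce, qe):
    # Linearized Langmuir: Ce/qe = Ce/qm + 1/(KL*qm)
    slope, intercept = np.polyfit(ce, ce / qe, 1)
    qm, kl = 1.0 / slope, slope / intercept
    return qm, kl, r2(ce / qe, slope * ce + intercept)

def fit_freundlich(ce, qe):
    # Linearized Freundlich: ln qe = ln KF + (1/n) ln Ce
    slope, intercept = np.polyfit(np.log(ce), np.log(qe), 1)
    kf, n = np.exp(intercept), 1.0 / slope
    return kf, n, r2(np.log(qe), slope * np.log(ce) + intercept)

# Synthetic equilibrium data generated from a Langmuir curve (qm=10, KL=0.5),
# so the Langmuir fit should win the R^2 comparison.
ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
qe = 10.0 * 0.5 * ce / (1.0 + 0.5 * ce)

qm, kl, r2_l = fit_langmuir(ce, qe)
_, _, r2_f = fit_freundlich(ce, qe)
print(f"Langmuir R^2={r2_l:.4f} (qm={qm:.2f}), Freundlich R^2={r2_f:.4f}")
```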
Finally, the comparison of the maximum dye adsorption capacity ( q m ) of this study with those of other agricultural wastes for DR28 dye removal is demonstrated in Table 5 . The q m values of SBB, SBBT, SBBM, SBBA, and SBBZ were higher than those of cabbage (2.31 mg/g) and rice husk (1.28–2.04 mg/g) 15 , 45 , and the q m value of SBBM was higher than those of the prior studies in Table 5 except the studies of Rehman et al. 46 , Ibrahim and Sani 47 , and Masoudian et al. 48 .
Adsorption kinetics
The adsorption rates and mechanisms of SBB, SBBT, SBBM, SBBA, and SBBZ are determined by several adsorption kinetics: the pseudo-first-order kinetic, pseudo-second-order kinetic, Elovich, and intra-particle diffusion models. Their graphs are plotted as q t versus t . The results are shown in Fig. 7 a–e, and Table 6 reports their equilibrium kinetic parameters.
As with the adsorption isotherms, the R 2 value is normally used to determine which adsorption kinetic model better describes the adsorption rate and mechanism, and the model with the higher R 2 is preferred. Since the R 2 values of SBB, SBBT, SBBM, SBBA, and SBBZ in the pseudo-second-order kinetic model demonstrated the highest values of 0.997, 0.997, 0.999, 0.999, and 0.994, respectively, their adsorption rates and mechanisms were well described by chemisorption with a heterogeneous process, in agreement with a previous study 18 . In addition, the kinetic parameter q e is used for comparing their DR28 dye adsorption capacities. The q e of SBBM was higher than those of the other materials, so it could adsorb more DR28 dye than the others, in agreement with the batch experiment results. Furthermore, the equilibrium DR28 dye adsorption capacities of SBB, SBBT, SBBM, SBBA, and SBBZ, demonstrated in Fig. 7 f, reached equilibrium within 60 min, indicating their fast kinetic reaction rates.
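The pseudo-second-order fit named above can be sketched with synthetic kinetic data (assumed q e and k 2 values, not the paper's): the model q t = k 2 q e 2 t / (1 + k 2 q e t) is fitted in its linear form t/q t = 1/(k 2 q e 2 ) + t/q e , where the slope gives q e and the intercept gives k 2 .

```python
import numpy as np

def fit_pso(t, qt):
    """Fit the linearized pseudo-second-order model t/qt = 1/(k2*qe^2) + t/qe."""
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept  # since intercept = 1/(k2*qe^2)
    return qe, k2

# Synthetic kinetic data generated with qe = 4.0 mg/g, k2 = 0.05 g/(mg*min)
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0])
qt = 0.05 * 4.0 ** 2 * t / (1.0 + 0.05 * 4.0 * t)

qe, k2 = fit_pso(t, qt)
print(f"qe = {qe:.2f} mg/g, k2 = {k2:.3f} g/(mg*min)")
```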
Thermodynamic study
The results of the thermodynamic studies in the range of 293.15–323.15 K of SBB, SBBT, SBBM, SBBA, and SBBZ on DR28 dye removal are demonstrated in Table 7 and Fig. 8 a–e. Their ∆ G ° had negative values at all temperatures, which meant the adsorption was a favorable process of a spontaneous nature. For ∆ H °, all materials had positive values, which meant their DR28 dye adsorption processes were endothermic 18 , and their ∆ S ° had positive values, which meant the randomness during the adsorption process increased 51 . Therefore, increasing temperature was favorable for DR28 dye adsorption onto all materials.

Conclusion
Five adsorbent materials, namely sugarcane bagasse beads (SBB), sugarcane bagasse modified with titanium dioxide beads (SBBT), sugarcane bagasse modified with magnesium oxide beads (SBBM), sugarcane bagasse modified with aluminum oxide beads (SBBA), and sugarcane bagasse modified with zinc oxide beads (SBBZ), were synthesized from sugarcane bagasse and various metal oxides to investigate their DR28 dye removal efficiencies. SBBM had the highest specific surface area and pore volume, whereas its pore size was the smallest among the materials. The surfaces of SBB, SBBM, SBBT, and SBBA were scaly sheet surfaces and structures with an irregular shape, whereas SBBZ had a coarse surface. Five main chemical elements, oxygen (O), carbon (C), calcium (Ca), chloride (Cl), and sodium (Na), were observed in all materials, whereas titanium (Ti), magnesium (Mg), aluminum (Al), and zinc (Zn) were only detected in SBBT, SBBM, SBBA, and SBBZ, respectively. Five main chemical functional groups, O–H, C–H, C=O, C=C, and C–O–C, were found in all materials, and Ti–O–Ti, Mg–O, Al–O, and Zn–O were observed in SBBT, SBBM, SBBA, and SBBZ, respectively. The points of zero charge (pH pzc ) of SBB, SBBT, SBBM, SBBA, and SBBZ were 6.57, 7.31, 10.11, 7.25, and 7.77, respectively. All materials could adsorb DR28 dye at a concentration of 50 mg/L by more than 81%, and SBBM showed the highest DR28 dye removal efficiency of 94.27%. For the adsorption isotherms, the Langmuir model was suitable for SBB, corresponding to physical adsorption, whereas the Freundlich model was appropriate to explain the adsorption patterns of SBBT, SBBM, SBBA, and SBBZ, relating to physicochemical adsorption. For the adsorption kinetics, a pseudo-second-order kinetic model was the best-fit model for all materials, well explained by the chemisorption mechanism. Since the ∆ G ° of all materials had negative values, the adsorption was a favorable process of a spontaneous nature, while their ∆ H ° had positive values, which meant the process was endothermic. For ∆ S °, the values were positive, which meant the randomness during the adsorption process increased. Therefore, all materials, especially SBBM, were potential materials for adsorbing DR28 dye.
For future work, real wastewater might be applied to confirm the materials' abilities for DR28 dye adsorption. In addition, other anionic dyes might be investigated for possible adsorption by SBB, SBBT, SBBM, SBBA, and SBBZ. Moreover, a continuous-flow study should be conducted for possible application in industrial wastewater systems. Furthermore, investigating the leaching of metal oxides from SBBT, SBBM, SBBA, and SBBZ after the adsorption process is suggested to confirm that the treated wastewater is not contaminated.

Abstract

The direct red 28 (DR28) dye contamination in wastewater blocks the transmission of light into the water body, resulting in the inability of aquatic life to photosynthesize. In addition, the dye is difficult to break down and persists in the environment, and it is also harmful to aquatic life and water quality because of its aromatic structure. Thus, wastewater contaminated with dyes must be treated before release into the water body. Sugarcane bagasse beads (SBB), sugarcane bagasse modified with titanium dioxide beads (SBBT), sugarcane bagasse modified with magnesium oxide beads (SBBM), sugarcane bagasse modified with aluminum oxide beads (SBBA), and sugarcane bagasse modified with zinc oxide beads (SBBZ) were synthesized for DR28 dye removal in aqueous solution, and they were characterized with several techniques: BET, FESEM-FIB, EDX, FT-IR, and the point of zero charge (pH pzc ). Their DR28 dye removal efficiencies were examined through batch tests, adsorption isotherms, and kinetics. SBBM had the highest specific surface area and pore volume, whereas its pore size was the smallest among the materials. The surfaces of SBB, SBBM, SBBT, and SBBA were scaly sheet surfaces with an irregular shape, whereas SBBZ had a coarse surface. Oxygen, carbon, calcium, chloride, sodium, O–H, C–H, C=O, C=C, and C–O–C were found in all materials. The pH pzc of SBB, SBBT, SBBM, SBBA, and SBBZ were 6.57, 7.31, 10.11, 7.25, and 7.77, respectively.
All materials could adsorb DR28 dye at 50 mg/L by more than 81%, and SBBM had the highest DR28 dye removal efficiency of 94.27%. The Langmuir model was appropriate for SBB, whereas the Freundlich model was suitable for the other materials. A pseudo-second-order kinetic model well described their adsorption mechanisms. Their adsorption of DR28 dye was endothermic and spontaneous. Therefore, they were potential materials for adsorbing DR28 dye, especially SBBM.
The possible mechanisms for DR28 dye adsorptions
The possible mechanisms for the DR28 dye adsorption of SBB, SBBT, SBBM, SBBA, and SBBZ are demonstrated in Fig. 9 , which adapts the ideas of Ngamsurach et al. 8 and Praipipat et al. 9 , 18 . Their main chemical functional groups of O–H, C–H, C=O, C=C, and C–O–C, and the complex molecules of Ti–O–Ti, Mg–O, Al–O–Al, and Zn–O connected with their hydroxyl groups (O–H), played a major role in DR28 dye adsorption. The possible mechanisms of electrostatic attraction, hydrogen bonding interaction, and n–π bonding interaction are used to explain the DR28 dye adsorption by SBB, SBBT, SBBM, SBBA, and SBBZ, as demonstrated in Fig. 9 .
P.P.: Supervision, Project administration, Conceptualization, Funding acquisition, Investigation, Methodology, Validation, Visualization, Writing—Original Draft, Writing-Review and Editing. P.N.: Visualization, Writing—Original Draft. N.L.: Investigation. C.K.: Investigation. P.B.: Investigation. W.N.: Investigation.
Funding
The authors are grateful for the financial support received from The Office of the Higher Education Commission and The Thailand Research Fund grant (MRG6080114), Coordinating Center for Thai Government Science and Technology Scholarship Students (CSTS) and National Science and Technology Development Agency (NSTDA) Fund grant (SCHNR2016-122), and Research and Technology Transfer Affairs of Khon Kaen University.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Competing interests
The authors declare no competing interests.

CC BY | Sci Rep. 2024 Jan 13; 14:1278
PMC10787781 | PMID 38218964

Introduction
Nearly half of China’s coal resource reserves and output are attributed to thick coal seams 1 – 3 . The fully mechanized longwall top coal caving (LTCC) mining technology is one of the main technologies used for mining thick seams 4 – 7 . Improper timing of the caving-opening closure in the caving process of the LTCC mining face can result in excessive or insufficient caving, leading to resource wastage or compromising coal quality 8 , 9 . Reasonable time for closing the caving-opening depends on the mixing degree of coal gangue 10 – 12 . Therefore, accurate, real-time and effective technology for identifying coal gangue mixing degree is an effective measure to realize automation and intelligent mining in coal mines, as well as promoting the intelligent industrialization of coal industry 13 – 15 . It can effectively reduce labor cost, enhance the safety of coal mining operations, reduce equipment maintenance expenses, and significantly improve the coal mining extraction rate, ultimately leading to higher productivity and efficiency 7 , 16 , 17 .
In recent years, many experts and scholars have conducted extensive scientific research in the field of coal and gangue detection 18 . Qingjun Song et al. proposed various methods to identify the vibration and sound signals emitted by the hydraulic support tail beam as coal and gangue collapse, aiming to realize the detection of coal and gangue 16 , 19 . Bingxiang Huang et al. proposed a coal gangue detection method utilizing near-infrared spectroscopy 20 . Jiachen Wang et al. employed a top coal tracker to monitor the movement of top coal and achieve automated coal drawing in combination with coal gangue image recognition. Chuangyou Liu et al. proposed a coal gangue identification method using active microwave irradiation infrared detection. Zengcai Wang et al. proposed a γ-ray method for detecting the coal gangue mixing rate in top coal caving mining 12 , 17 , 21 , 22 . Liansheng Li et al. put forward a method for identifying coal and gangue based on density difference 23 , and Feng Xing proposed a non-contact microwave detection technology to detect the mixing degree of coal and gangue 24 . Jingjing Deng et al. employed terahertz technology to generate images of coal gangue mixtures for the purpose of coal gangue detection 25 – 28 . Yuming Huo optimized the parameters of the intelligent coal drawing process by establishing a predictive model of the periodic coal drawing time 29 , 30 .
The research above on the recognition of coal and gangue in LTCC mining primarily adopts the principles of image grayscale, sound spectrum, vibration spectrum, the natural γ-ray method, and various composite monitoring methods. However, it is evident from the research objects and results that these studies are still at a preliminary stage, primarily due to the complex structural characteristics of extremely thick coal seams in China, which often contain multiple layers of dirt bands 31 , 32 . The presence of dirt bands results in a mixed flow containing top coal, dirt bands and roof rock flowing out of the top-coal caving-opening during the top coal caving process of the working face 33 .
That is, a coal gangue detection method for LTCC mining should not only address the detection of the coal gangue mixing ratio in coal seams with a simple structure, but also accommodate coal seams with a complex structure 32 , 34 . In order to achieve the objective of real-time and accurate recognition of the mixing degree between coal and gangue in LTCC mining, this paper proposes an accurate recognition method based on natural γ-rays, which utilizes the radiation difference characteristics of the natural γ-rays of coal and rock. A low-radiation-level radioactive measurement method is employed to determine the instantaneous mixing ratio of the coal and gangue mixture during the top coal caving process, thereby laying the foundation for realizing the intellectualization of LTCC mining.
Distribution characteristics of natural radionuclides in thick coal seams
Natural radionuclide
Natural radionuclides were formed during interstellar processes, such as the Big Bang, and continue to exist; they were transported to Earth during its formation. Currently, there are three natural radioactive series (uranium, actinium and thorium) and some non-series radionuclides in nature; however, only the three series can significantly impact radiometric measurement, as illustrated in Table 1 .
Deposition characteristics of natural radionuclides in coal beds
Natural radionuclides are present in various types of rocks, including coal-bearing strata 35 – 37 . In LTCC mining, natural-ray coal gangue identification technology relies heavily on the immediate roof of the coal seam. Therefore, the roof deposition characteristics of thick and extra-thick coal seams were studied, and the characteristics of their natural radiation intensity were analyzed.
Factors influencing the content and distribution of radionuclides in sedimentary rocks comprise the sediment source, the composition and structure of the rocks, the sedimentary conditions and environment, the radionuclide content of the parent rock, the duration of radionuclide presence, the sediment grain size, and the distance from the original location. Thus, the abundance of radionuclides in rocks is influenced by factors related to their formation mode, location and time. For natural rocks, the distribution of radionuclides is as follows: rocks or minerals of a similar nature exhibit comparable levels of radionuclide abundance, while there are significant variations in radionuclide abundance among different rocks or minerals.
These aforementioned laws possess statistical characteristics and exist objectively, forming the basis for the coal gangue identification technology by natural gamma-ray.
To sum up, the coal mine roof exhibits varying radiation characteristics due to the diverse composition of sedimentary rocks, resulting in significant variations in the levels of uranium, thorium and potassium.
The radionuclide content in the roof primarily correlates with the grain size of the sediment, the amount of organic substances in the sedimentary environment, the sedimentary environment and conditions, the sedimentary time and other factors. Consequently, the following general rules apply: (1) The content of radionuclides in rocks of the same type is similar. (2) The content of radionuclides in different rocks varies greatly. In the coal-bearing rock series, coal has the lowest radioactive intensity, while the radioactivity of conglomerate, coarse sandstone, medium sandstone, fine sandstone, siltstone, sandy mudstone, shale and mudstone gradually increases; the smaller the particle size of the diagenetic material, the greater the mud content and the stronger the radiation. (3) Inland roof rocks exhibit less radiation than offshore roof rocks. The presence of asphaltene mudstone, phosphorite and organic matter in offshore sedimentary rocks contributes to the effective absorption of radionuclides during sedimentation, resulting in a generally higher radiation level compared to inland-type rocks. (4) A shorter coal-forming time generally means stronger roof radiation. Thick coal seams are mostly lignite with a low degree of metamorphism, and their roof formation time is shorter than that of bituminous coal and anthracite, so their roof radioactivity is relatively high. Therefore, for similar roof rocks, a shorter formation time means a greater radiation intensity, which is beneficial to the application of natural-ray coal gangue detection technology. (5) Sedimentary rocks containing diagenetic minerals such as potash exhibit high radiation content. In order to identify the source of natural radiation accurately, it is necessary to analyze the composition of the diagenetic minerals when measuring the radiation of a coal mine roof.
As depicted in Fig. 1 , the radioactivity content of coal is the lowest; if the shielding effect of loose coal is considered, the radioactivity of coal can be ignored. The radioactivity of potassium salt is the highest, and the radioactivity of common coal mine roof rocks such as sandstone is more than 5 times that of coal. Since the difference is large, it is feasible to obtain the content of gangue in the mixture by measuring the radiation intensity of the coal and gangue mixture.
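The idea in the last sentence can be sketched as a simple two-point calibration. The numbers below are assumptions for illustration, not mine-specific calibration values: with the radioactivity of coal taken as ~0 (as in this paper), the net count rate above background is roughly proportional to the gangue mass in the detector's field of view.

```python
# Assumed calibration points (illustrative, cps = counts per second):
N_BG = 60.0            # background count rate with pure coal in view
N_PURE_GANGUE = 260.0  # count rate with 100% gangue in view

def gangue_ratio(count_rate_cps: float) -> float:
    """Estimate the gangue mass fraction (0..1) from a measured count rate,
    assuming coal contributes ~0 radiation and mixing is linear."""
    frac = (count_rate_cps - N_BG) / (N_PURE_GANGUE - N_BG)
    return min(max(frac, 0.0), 1.0)  # clamp to the physical range

print(gangue_ratio(60.0), gangue_ratio(160.0), gangue_ratio(300.0))
```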
Characteristics of roof rock property of LTCC face in extra thick coal seam in China
According to the statistics of China National Knowledge Infrastructure Net documents, the immediate roof lithology of 94 LTCC faces in China is shown in Fig. 2 .
As depicted in Fig. 2 , among the 90 working faces surveyed, mudstone is the most prevalent roof rock type, representing 58.5% of the total statistical data, followed by sandstone at 30.9%, while conglomerate and limestone have substantially lower occurrences. Among the sandstones, fine-grained siltstone is the primary type, accounting for 51.7% of the sandstone roofs.
According to Fig. 1 , about 86% of thick coal seams in China show significant differences between the radiation intensity of the immediate roof and that of the coal seam, so natural γ-ray technology has wide applicability in coal gangue identification.
Radiation characteristics of coal and rock strata in typical thick coal seam mining areas in China
It can be seen from the above analysis that the radionuclide content of sedimentary rocks is related to the content of clay minerals, the formation time, the environment of the sedimentary area and other factors. Therefore, representative typical mining areas such as Dongsheng, Datong, Yanzhou, Shuozhou and Longkou were selected to analyze and study the radiation characteristics of coal and roof rock in thick and extra-thick coal seams.
As depicted in Fig. 3 : (1) There are radioactive elements in both coal and rock, and the radiation intensity of coal samples is generally small, even less than their own shielding capacity; therefore, in this paper, the radiation content of coal is taken as 0. The radiation intensity of roof rock is much higher than that of coal samples, and the difference can be several times or even dozens of times; therefore, the mixing degree of coal and gangue can be identified through the difference in radiation characteristics between coal and rock. (2) The difference in radiation between different rocks is large, so the dirt band and roof rock in a coal seam with a complex structure can be distinguished and identified through the difference in radiation characteristics between different rocks. (3) The radiation intensity of rock samples from the same sedimentary stratum in the same coal field is similar, so for the same working face or even the same coal seam, there is no need to frequently adjust the parameters when using the ray-based coal gangue identification technology.
Basic principle of natural γ-ray coal and gangue recognition
The principle of natural γ-rays coal and gangue recognition is summarized as follows:
Based on the radiation differentiation characteristics of natural γ-rays in coal and rock, a low-radiation-level radioactivity measurement method is adopted to identify the instantaneous mixing rate of the coal and gangue flow during the coal releasing process. Combined with the time-series characteristics of the caving flow of top coal in fully mechanized caving mining and the energy spectrum characteristics of different strata, the automatic identification of coal and gangue in the fully mechanized caving of thick coal seams with a complex structure containing multiple dirt bands is realized. In the process of top coal caving, the gangue flowing out of the caving-opening follows a changing law from absent to present and from less to more; correspondingly, the natural radiation intensity of the coal and gangue mixture changes from weak to strong, from which the content of gangue in the mixed flow can be determined. By detecting the instantaneous radiation intensity of the mixed coal and gangue flow, the gangue rate is determined, and thus the time to close the caving-opening. For coal seams with a complex structure containing one to multiple layers of dirt bands, the influence of the dirt bands on the accuracy of coal and gangue recognition can be excluded according to the different energy spectrum characteristics of the dirt bands and the caving time-sequence characteristics; that is, during caving, the coal and gangue at different levels are caved in time sequence according to their distance from the caving-opening.
The schematic diagram of natural γ-ray coal and gangue recognition is shown in Fig. 4 .
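The decision logic of the principle above can be sketched as follows. The threshold and hold time are assumptions for illustration, not values from the paper: a short-lived rise in the gangue ratio (a dirt band passing the opening) is tolerated, while a sustained rise (roof rock) triggers closure of the caving-opening.

```python
def close_time(ratios, dt=1.0, threshold=0.3, hold_s=10.0):
    """Scan a stream of gangue-ratio estimates sampled every `dt` seconds.
    Return the time (s) at which the ratio has stayed above `threshold`
    for `hold_s` consecutive seconds (sustained gangue = roof rock),
    or None if that never happens (transients are ignored)."""
    run = 0.0
    for i, r in enumerate(ratios):
        run = run + dt if r > threshold else 0.0
        if run >= hold_s:
            return (i + 1) * dt
    return None

# A 5 s dirt-band transient, then sustained roof rock after t = 45 s:
signal = [0.05] * 20 + [0.6] * 5 + [0.05] * 20 + [0.7] * 30
print(close_time(signal))
```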
Development of coal and gangue identification system
The developed mine intrinsically safe coal and gangue natural-ray real-time detection system includes two parts: a detector and a data acquisition and display terminal. The detector is composed of a NaI crystal (Ф100 × 100), a photomultiplier, a shell, a data processing terminal and an explosion-proof shell. The data processing terminals include the coal and gangue identification system APP and intrinsically safe Android mobile phones. The data acquisition and display terminal and the detector are connected wirelessly for data transmission. During measurement at the LTCC face underground, a single-chip microcomputer is used for data analysis, processing and display. The system and related indicators are shown in Table 2 .
Due to the difference between the background radiation of the underground environment and that of the laboratory environment, parameter debugging is required before the dynamic monitoring of the coal drawing mouth to adapt to the underground radiation environment.
The detector is placed in the material chamber of working face 1303. The chamber floor is a coal seam covered with sand and gravel. The detector was tested once each with the detection face upward and downward, with the bottom padded 20 cm above the ground, as shown in Fig. 5 .
(1) Influence of supply voltage.
In order to test the influence of different circuit supply voltages on the detection efficiency of the detector, the counting conditions were recorded at detector supply voltages of 11 V and 12 V, as shown in Fig. 6 .
As depicted in Fig. 6 , with other conditions unchanged and only the power supply voltage varied, the count value of the detector is relatively stable, which verifies that the circuit can work normally at supply voltages above 11 V.
(2) Threshold debugging.
The threshold adjustment range is 0–1 V, with an adjustment step of 0.02 V. The test results are shown in Fig. 7 .
As depicted in Fig. 7 : (1) with the increase of the threshold value, the count value of the detector shows a downward trend; (2) at threshold values of 0–0.08 V, the count of the detector is too large, while beyond 0.08 V the count is in the normal range; (3) the detection surface of the detector is directional, that is, the count of the detector is related not only to the position of the detector but also to the direction of the detection surface.
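The threshold behaviour in point (1) can be sketched as a minimal discriminator model (synthetic pulse amplitudes in volts, not recorded data): the count is the number of pulses whose peak amplitude exceeds the threshold, so raising the threshold can only lower the count.

```python
def count_pulses(amplitudes, threshold_v):
    """Count pulses whose peak amplitude exceeds the discriminator threshold."""
    return sum(1 for a in amplitudes if a > threshold_v)

# Synthetic pulse amplitudes (V); small pulses dominate, as in detector noise.
pulses = [0.03, 0.05, 0.07, 0.12, 0.30, 0.45, 0.80]
for thr in (0.02, 0.08, 0.50):
    print(thr, count_pulses(pulses, thr))
```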
(3) Background comparison between ground and underground environments.
In order to compare the radiation difference between surface and underground environmental conditions, the background counts of the detector were recorded at different thresholds, as shown in Fig. 8 .
As depicted in Fig. 8 , beyond 0.08 V the underground environment detector count is in the normal range and slowly decreases as the threshold increases; in the surface environment, the count is in the normal range only when the threshold reaches 0.5 V. This shows that the radiation field of the surface environment is far more complex than that of the underground environment. As the radiation content of rock is not affected by the change of detection location, the contrast between the presence and absence of gangue will be more obvious during underground detection than on the surface.
In order to verify the detection effect of gangue radiation in the underground environment, a static test was carried out in the material roadway chamber of the underground working face. The detector was placed in the chamber with the detection face upward and the bottom padded 20 cm above the ground.
During the test, the background was first counted at different threshold values (0.08–1 V); then gangue (about 10 kg) was placed, and the count was taken again at different threshold values (0.08–1 V). Figure 9 shows the specific data.
As depicted in Fig. 9 , the radiation intensity of the same pile of gangue decreases with the increase of the threshold value, and the background value also decreases. The background radiation of the underground environment comes from the radioactive elements in the roof and floor rocks and in the air, of which the radioactive elements in the roof and floor rocks account for the majority. Therefore, the energy spectrum of the background is close to that of the gangue, and adjusting the threshold value affects the counts of both.
In order to determine the threshold region with an obvious detection contrast, the net count with gangue placed is divided by the background value at the same threshold to obtain the counting increase for gangue placed under different threshold conditions, as shown in Fig. 10 .
As depicted in Fig. 10 , below a threshold of 0.2 V the counting amplitude fluctuates greatly; between 0.2 and 0.9 V, the counting amplitude increases steadily and to a certain extent; at a threshold of 1 V, the amplitude decreases significantly. Therefore, in the underground environment, the threshold value can be set between 0.8 and 0.9 V for effective detection efficiency.
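The threshold-selection procedure just described can be sketched as follows. The counts are made-up numbers shaped like the trend reported above (amplification rising toward 0.9 V, falling at 1 V), not the measured data: for each threshold, divide the count with gangue present by the background count and pick the threshold where this amplification peaks.

```python
# Illustrative counts (cps) per discriminator threshold (V):
background = {0.2: 500, 0.4: 300, 0.6: 180, 0.8: 100, 0.9: 80, 1.0: 60}
with_gangue = {0.2: 650, 0.4: 420, 0.6: 270, 0.8: 170, 0.9: 140, 1.0: 75}

# Amplification = count with gangue / background count at the same threshold.
gain = {thr: with_gangue[thr] / background[thr] for thr in background}
best = max(gain, key=gain.get)
print(gain, best)
```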
Based on the established identification method for coal, gangue and rock in the LTCC of extra-thick coal seams, and taking the LTCC working faces of Lilou Coal Mine (thick coal seam with a simple structure), Xiaoyu Coal Mine (thick coal seam with a single-layer dirt band) and Tashan Coal Mine (thick coal seam with a complex structure) as specific conditions, on-site installation schemes for the detectors were designed to test the sensitivity, signal stability and environmental adaptability of the detectors, and the response characteristics of the detector to the dirt band and roof rock were analyzed to provide a basis for determining the identification parameters.
KZT12 intrinsically safe coal gangue identification detector for mining has three installation positions: under the support tail beam with the detection direction facing the coal scupper, above the rear scraper conveyor with the detection direction facing the coal flow, and under the support tail beam with the detection direction facing the coal scupper, as shown in Fig. 11 A–C. | Result
Application of coal gangue identification system in simple structure thick coal seam fully mechanized caving face
Working face 1303 (coal seam 3) of Lilou Coal Mine is located in the middle and lower part of Shanxi Formation. Most of the coal seams (coal seam 3) are relatively stable with simple structure. The average thickness of the coal seam is 6.98 m, the mining height is 3.6 m, the drawing height is 3.38 m, and the average dip angle of the coal seam is 13°. The immediate roof of the coal seam is sandy mudstone with a thickness of 0.96 m, and the basic top is fine sandstone with a thickness of 15.12 m.
The KZT12 intrinsically safe coal gangue identification detector is installed under the tail beam of the support, with the detection direction facing the coal chute, as shown in Fig. 11 A.
The radiation data and filtering data (Kalman filtering method) of coal gangue monitored during coal drawing are shown in Fig. 12 .
As depicted in Fig. 12 , during top coal drawing the detector responds relatively sensitively to the occurrence and content of gangue. The detected radiation data show obvious periodicity and can be divided into two stages: a pure coal stage and a mixing stage of top coal and gangue. In the pure coal stage, there is pure coal near the caving-opening; since there are almost no radionuclides in the coal, the detected radiation curve fluctuates near the background value with a small amplitude. As the top coal is exhausted, the immediate roof rock gradually mixes into the caving-opening. At this stage, the radiation intensity increases significantly, indicating that a large amount of gangue is mixed in, and the caving operation is terminated in accordance with the principle of "close the caving-opening when gangue is seen". The radiation curve then gradually decreases and returns to the background level as the caving-opening is closed.
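The Kalman filtering mentioned for Fig. 12 can be sketched as a one-dimensional (random-walk state) filter over the count-rate series. The process noise `q`, measurement noise `r`, and the sample values are assumptions for illustration, not the system's tuned parameters.

```python
def kalman_1d(measurements, q=1.0, r=100.0, x0=None, p0=1000.0):
    """Scalar Kalman filter: random-walk state model, noisy count-rate
    measurements. Returns the smoothed estimate at each step."""
    x = measurements[0] if x0 is None else x0  # state estimate
    p = p0                                     # estimate variance
    out = []
    for z in measurements:
        p += q                 # predict step: variance grows by process noise
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with measurement z
        p *= (1.0 - k)         # updated variance
        out.append(x)
    return out

# Illustrative noisy count-rate series (cps) with a gangue peak in the middle:
noisy = [60, 75, 58, 66, 120, 180, 175, 190, 185, 70, 62]
smooth = kalman_1d(noisy)
print([round(v, 1) for v in smooth])
```

Each output is a convex combination of the previous estimate and the new measurement, so the smoothed curve tracks the gangue peak while damping sample-to-sample jitter.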
Application test of coal gangue identification system in LTCC face with single layer thick coal seam of dirt band
The coal seam of Working Face 8202 in Xiaoyu Coal Mine has a stable occurrence with little change. The thickness of the coal seam is 9.2–10.2 m, with an average thickness of 9.7 m. The top coal contains a layer of dirt band. The coal cutting height of the shearer is 3.2 m, the coal drawing height is 6.5 m, and the mining drawing ratio is 1:2.03.
KZT12 intrinsically safe coal gangue identification detector is installed above the rear scraper conveyor, with the detection direction facing the coal flow, as shown in Fig. 10 B.
The radiation data and filtering data (Kalman filtering method) of coal gangue monitored during coal drawing are shown in Fig. 13 . For the convenience of analysis, the occurrence of dirt bands in the top coal is compared with the data detected at the caving-opening.
As depicted in Fig. 13, during top coal drawing the detector responds sensitively to the occurrence and content of dirt band/roof rock. The process of top coal drawing lasts for 200 s, and there is one radiation peak. The detected radiation data show an obvious periodicity, which can be divided into three stages: a pure coal stage, a gangue mixing stage and an immediate roof rock mixing stage. In the pure coal stage, there is pure coal near the caving-opening. Since there is almost no radionuclide in the coal, the radiation data detected are at the same level as the background; that is, the radiation intensity fluctuates within 42–78 cps during the first 0–70 s after the caving-opening is opened, with no obvious upward or downward trend.
As top coal drawing proceeds, the gangue contained in the top coal reaches the caving-opening. At this stage, the data detected by the detector show an upward trend; with the full discharge of the gangue, the detected data gradually decline. That is, during 70–95 s, the radiation intensity first rises from 55 to 98 cps and then gradually declines. In the following 95–160 s, the coal drawing is in a pure coal stage, and the radiation intensity is relatively stable. As the top coal is exhausted, after 160 s the immediate roof rock starts to enter the caving-opening. At this stage, following the criterion of “close the caving-opening when gangue is seen”, the coal drawing operation is terminated. The radiation curve gradually decreases and returns to the background level as the caving-opening is closed.
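The termination criterion of closing the opening once gangue is seen can be sketched as a sustained-threshold rule on the (filtered) intensity; the background level, threshold factor and window length below are hypothetical tuning parameters, not values from the paper.

```python
# Sketch: terminate coal drawing once the radiation intensity stays above a
# background-derived threshold for several consecutive readings, so that a
# sustained rise (roof rock) is distinguished from a short statistical spike.

def should_close(intensities, background_cps=60.0, factor=1.5, window=3):
    """Return the index at which to close the caving-opening, or None."""
    thresh = factor * background_cps
    run = 0
    for i, v in enumerate(intensities):
        run = run + 1 if v > thresh else 0
        if run >= window:
            return i
    return None

# Pure coal (~40-80 cps) followed by immediate-roof mixing (>90 cps)
trace = [55, 62, 48, 70, 66, 95, 110, 130, 145, 150]
print(should_close(trace))
```

Requiring a window of consecutive exceedances trades a slightly later closing time for robustness against the counting noise visible in the pure coal stage.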
Application test of gangue identification system in thick seam with complex structure
The average thickness of 8205 LTCC coal seam in Tashan Coal Mine is 15.09 m, the mining height is 3.8 m, the caving height is 11.29 m, and the mining caving ratio is 1:2.97. The cycle progress is 0.8 m, and the coal drawing step is 0.8 m. The coal seam contains 4–8 layers of dirt bands, with an average of 6 layers. The thickness of a single layer varies from 0.23 to 0.65 m. The lithology of the dirt bands is magmatic rock, sandy mudstone, mudstone, carbonaceous mudstone, and kaolinite. Most of the upper part of the coal seam is metamorphosed and silicified due to lamprophyre intrusion.
KZT12 intrinsically safe coal gangue identification detector is installed under the tail beam of the support, with the detection direction facing the coal chute, as shown in Fig. 10 C.
The radiation data and filtering data (Kalman filtering method) of coal gangue monitored during coal drawing are shown in Fig. 14 . For the convenience of analysis, the occurrence of dirt bands in the top coal is compared with the data detected at the coal drawing hole.
As depicted in Fig. 14, during coal drawing the detector responds sensitively to the occurrence and content of dirt band/roof gangue. The process of coal drawing at the caving-opening lasts for 220 s, and there are four radiation peaks. Due to the existence of dirt bands, the detected radiation data show an obvious periodicity, which can be divided into three stages: a pure coal stage, a dirt band mixing stage and an immediate roof mixing stage. The dirt band mixing stage can be subdivided according to the dirt band position, and the radiation behaviour of different lithologies differs. The numbers in the figure index the dirt bands in the top coal, sorted from bottom to top. In the pure coal stage, there is pure coal near the caving-opening. Since there is almost no radionuclide in the coal, the radiation data detected at this stage equal the background; that is, the radiation intensity fluctuates within 40–87 cps during the first 0–35 s after the caving-opening is opened, with no obvious upward or downward trend.
As the top coal caving process enters the mixed stage of coal and gangue, the dirt band in layer 1 of the top coal reaches the caving-opening first. At this time, the data detected by the detector show an upward trend; with the exhaustion of the layer 1 dirt band, the detected data gradually decrease. That is, the radiation intensity first increases from 55 to 138 cps and then gradually decreases within 35–58 s. Then the dirt band in layer 2 arrives at the caving-opening, and the detected data show an upward trend again. Because the distance between dirt band layers 2 and 3 is relatively small, the time periods during which these two layers enter the caving-opening overlap, so their entry sequence leaves no obvious feature on the radiation intensity curve. During 63–107 s, the curve first rises slowly from 71 to 143 cps (63–95 s) and then decreases from 143 to 79 cps (95–107 s). The main reason for the long duration of the rising section during the mixing of dirt band layers 2 and 3 is that, as the mixing amount of layer 1 gradually decreases, the mixing amounts of layer 2 and then layer 3 gradually increase; the main reason for the short duration of the descending section is that the mixing amounts of layers 1, 2 and 3 all decrease, and layer 1 is eventually no longer discharged through the caving-opening. At about 110 s, dirt band 4 is mixed in.
During 110–115 s, the radiation intensity fluctuates within the range of 85–107 cps without an obvious upward or downward trend; the analysis is that the mixing amounts of dirt band layers 2 and 3 decrease gradually while the mixing amount of layer 4 increases gradually, so that no clear trend emerges. During 115–122 s, the radiation intensity curve rises from 66 to 134 cps; at this time, the mixing amount of dirt band 4 increases to its maximum while the mixing amounts of layers 2 and 3 are very small. Then, during 122–138 s, the radiation intensity curve decreases from 136 to 71 cps, which is attributed to the gradual decrease in the mixing amount of dirt band 4.
From 140 s to the end of coal drawing at 220 s, the radiation intensity curve shows a slow upward trend. According to the comprehensive histogram of the working face and the histogram obtained by drilling before the experiment, the coal in the range from dirt band layer 4 to the roof is mostly metamorphosed and silicified due to lamprophyre intrusion. This part of the coal seam is defined as dirt band layer 5, and this coal drawing stage is defined as the mixing stage of lamprophyre/immediate roof. With the mixing of this part of the coal seam, the radiation intensity curve shows an obvious upward trend; since the recovery value of this coal is already small, coal drawing is stopped.
In the mixing stage of the dirt bands, due to the different locations, thicknesses and lithologies of the dirt bands in each layer, different radiation characteristics appear during the releasing process: (1) The closer the position of a dirt band is to the caving-opening, the earlier it is released, following the time sequence of top coal caving. (2) When a dirt band is relatively thick, the corresponding radiation monitoring curve rises clearly, maintains its intensity for a certain time and then falls slowly; that is, a thick dirt band remains mixed into the coal drawing flow for a period of time. When a dirt band is thin, the corresponding curve shows one or more fluctuations that rise and then immediately fall, with differing crest heights; that is, a thin dirt band cannot be continuously mixed into the coal drawing flow and enters intermittently. Dirt bands of different thickness therefore affect the monitoring data differently, which appears as different peak widths of the monitoring curve. (3) Gangue of different lithology has different radiation intensity, so its influence on the monitoring data during mixing differs, which appears as different peak heights of the monitoring curve. (4) Two layers of dirt bands located near each other mix in with overlapping signatures.
In conclusion, the KZT12 intrinsically safe coal gangue identification detector has a high sensitivity to the amount of gangue discharged at the caving-opening, and can determine the gangue mixing ratio during the top coal caving process in real time. | Discussion
LTCC mining technology is one of the main technologies for safe and efficient mining of extra-thick coal seams in China. At present, LTCC mining technology has been applied in most of the extra-thick coal seams in China, and breakthroughs have been made in theory and in key engineering applications. However, the top-coal caving process in fully mechanized top-coal caving still relies on manual control according to the principle of “close the caving-opening when gangue is seen”. With this manual process, it is difficult to avoid wasting resources or degrading coal quality through over- or under-drawing of the top coal. Moreover, the number of top-coal caving supports in a fully mechanized top-coal caving face is large, the working environment of the top-coal caving procedure is poor, and the labor intensity of manually controlling the caving-opening is high while the working efficiency is low.
In recent years, the intelligentization of coal mines has developed rapidly. With intelligent fully mechanized mining as the technical core, it has raised the intelligence level of coal mines and provided technical support for the high-quality development of the coal industry. The development of intelligent fully mechanized mining technology has promoted research on intelligent LTCC technology. Because coal gangue identification is the key technology for realizing automatic top coal caving and intelligent LTCC mining, domestic experts, scholars and research institutions have carried out a large amount of research work for this purpose and made gratifying progress. However, the instability of coal seam thickness and the presence of dust, water mist from dust suppression, varying brightness, spatial noise and other complex conditions in the coal drawing space make accurate and reliable coal gangue identification very difficult, which is why research has continued. For more than 10 years, the team has successively investigated coal gangue identification based on near-infrared rays, dual-energy γ-rays and natural γ-rays; based on analysis and comparison of their reliability and feasibility, coal gangue recognition based on low-level natural γ radiation is proposed. This method has been studied theoretically and experimentally, and tested and analyzed on site in Lilou Coal Mine, Tashan Coal Mine and Xiaoyu Coal Mine. The preliminary application shows that the proposed method and equipment for identifying coal gangue based on natural gamma rays can fully meet the requirements of real-time monitoring of the coal gangue mixing degree during top coal drawing. Of course, the prerequisite for applying a coal gangue recognition method based on low-level γ radiation is that the immediate roof rock contains certain radioactive elements.
According to the research in section ‘Radiation characteristics of coal and rock strata in typical thick coal seam mining areas in China’ of this paper, it has been proved that radioactive elements exist in the immediate roof of most thick coal seams in China.
Given the complex conditions at the top coal caving opening in LTCC mining faces, and in order to fully apply this technology to top coal caving faces, the author believes that later research should focus on the following two points. (1) Installation position of the coal gangue identification detector. Depending on the detector position, first, the detection range covering the caving-opening differs, and, because of the influence of the floor and of gangue in the goaf, the noise sources of the detector also need to be analyzed. Second, the monitoring range of the top coal differs with the detector position, which relates to the number of detectors installed and the way they cooperate with the electro-hydraulic control system. (2) Shape and size of the detector. In view of the complexity of the working environment at the coal chute and the space limitation, in order to achieve a good detection effect, the size of the detector should meet the detection requirements as far as possible in addition to the installation position. At the same time, to meet safety requirements and in combination with the special structure of the hydraulic support, it is necessary to customize a detector with a special shape suited to the coal chute space, which brings research challenges for the design and fabrication of detectors and for signal acquisition. The team has carried out research on both points and believes that better results will be demonstrated in the near future.
Among the common gangue and roof rocks, the radioactive intensity of carbonaceous mudstone is the highest, followed by sandy mudstone and kaolin mudstone, while the radioactive intensity of lamprophyre is the lowest. When the gangue and the immediate roof have the same lithology, their radioactive intensities are similar. Coal has the lowest radioactivity. The analysis of the radiation intensity of the coal seam and roof in typical thick seam LTCC mining faces shows that it is feasible to use the natural γ-ray method to identify coal and gangue. Based on the radiation characteristics of natural γ rays during the release of coal and gangue in extra-thick coal seams, and on the change law of the radiation intensity when the immediate roof is mixed in, the γ radiation intensity is taken as the identification parameter, and an identification method for coal and gangue in LTCC mining of extra-thick coal seams is proposed. The KZT12 intrinsically safe coal gangue identification detector developed for mining can monitor different mixing degrees of the coal and gangue mixture in real time, and has good applicability to the identification of coal and gangue in fully mechanized caving mining of extremely thick coal seams with complex structures. Because natural γ rays have strong penetrability, the detector is little affected by water mist, dust or light during mining, has strong environmental adaptability, and can realize volume monitoring. An automatic recognition method and system for coal and gangue in extra-thick coal seams has thus been formed.
The field test and analysis of coal gangue recognition have been carried out in the fully mechanized caving mining working faces of Lilou Coal Mine, Xiaoyu Coal Mine and Tashan Coal Mine, and remarkable technical effects have been achieved, laying a foundation for further research and application of intelligent fully mechanized caving mining in extra-thick coal seams under different conditions. | To address the technical limitations of automatic coal and gangue detection technology in fully mechanized top coal caving mining operations, a low-level radioactivity measurement method is used to assess the degree of coal-gangue mixing during the top coal caving process. This approach is based on the differing natural γ-ray radiation characteristics of coal and gangue. This study analyzed the distribution characteristics of natural γ-rays in the coal and rock layers of thick coal seams and the applicability of the method, introduced the basic principle of coal-gangue detection based on natural γ-rays, developed a test system for automatic coal-gangue detection, studied the radiation characteristics of coal and gangue, and proposed a determination model for the degree of coal-gangue mixing. Combined with the time sequence characteristics of the top coal’s releasing flow and the energy spectrum characteristics of different rock layers, precise coal-gangue detection in complex-structure thick coal seams with multiple gangue layers was realized. Field tests were conducted in Lilou, Xiaoyu and Tashan Coal Mines. The test results corroborated the research results well and achieved the expected outcomes, laying the foundation for the field application of intelligent coal mining.
Subject terms | Acknowledgements
We would like to thank all the authors of this article for their work and the relevant leaders and technicians of Tashan, Xiaoyu and Lilou Coal Mines for their support and help.
Author contributions
C.L. put forward the research ideas and methods, N.Z. carried out the theoretical analysis and experiment. All authors wrote and reviewed the manuscript. All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
Funding
Financial support for this work, provided by the Natural Science Foundation of China (Grant No. 52174137), China Postdoctoral Science Foundation (Grant Nos. 2020T130697, 2019M661994), and State Key Laboratory of Mining Response and Disaster Prevention and Control in Deep Coal Mines Open Foundation (Grant No. SKLMRDPC20KF13), are gratefully acknowledged.
Data availability
The original contributions presented in the study are included in this article, further inquiries can be directed to the corresponding author.
Competing interests
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. | CC BY | no | 2024-01-15 23:41:59 | Sci Rep. 2024 Jan 13; 14:1276 | oa_package/25/aa/PMC10787781.tar.gz |
|
PMC10787782 | 38218912 | Introduction
For many years, laser surgery has been an accepted tool in various surgical fields 1 . The use of lasers in hospitals is also increasing 2 . The advantage of lasers is that they can achieve results similar to those of conventional surgery while being minimally invasive 3 , 4 . At the same time, laser surgery usually has a high healing potential, with less post-operative inflammation and swelling 5 . The coagulation effect of the laser radiation improves visibility by coagulating small blood vessels 6 . While these benefits are widely accepted, the practical advantage is still debated. For example, Seifi and Matini 7 showed in a small meta-study that there was no benefit from cutting soft dental tissue with a laser. They conclude: “Introducing an appropriate laser with suitable wavelength, input power and other properties for mentioned indications needs more research and clinical trials”.
This shows that there is still a lot of research to be done. Even in recent studies, the types of lasers used are still under investigation 8 . For a more general view, it is important to look more closely at the laser-material interaction to differentiate the regimes of material ablation. Boulnois 9 distinguishes between vaporisation, photoablation and photodisruption. These mechanisms play an important role in the efficiency of laser surgery. For example, Werner et al. 10 showed that a laser could achieve more than ten times the ablation rate of the frequency doubled laser at 532 nm.
Sánchez et al. 11 compared a laser with an Er,CR:YSGG laser on gingiva. While the laser cut faster and without bleeding, the Er,CR:YSGG allowed a faster healing time. A slower healing ability of the laser was also found in later studies 12 . However, a meta-analysis from 2019 13 shows that a subgroup analysis for the type of laser cannot be done because the amount of data is too small. Furthermore, Protásio et al. 13 conclude “... that labial frenectomies performed with high-intensity surgical lasers are faster and offer a better prognosis in terms of pain and discomfort during speech and chewing than those performed with conventional scalpels”, if publication bias is not taken into account. In 2021, Fioravanti et al. 14 performed a randomized, double-blinded and controlled pediatric clinical study on obstructive sleep apnea syndrome (OSAS). There, a millisecond-pulsed laser with a wavelength of 980 nm strongly decreased the OSAS compared to the control group, contrasting with the result of the review by Protásio et al. 13 . Also, a case report in 2023 showed a good outcome for healing of osteonecrosis of the jaw 15 after laser treatment with a 980 nm laser. Moreover, a review by Lesniewski et al. 16 reveals that “... diode lasers and LEDs are equally effective tools for the phototherapy in periodontology and oral surgery”. Therefore, no firm conclusion can currently be drawn as to whether laser surgery, and which laser type, results in better surgical performance in the oral cavity. However, lasers in the near infrared range seem to be advantageous 16 .
While this was an example of labial frenectomies, one of the main drawbacks of some studies is the statistics with too few samples and examined parameters. For example, the highly cited study by Cercadillo-Ibarguren et al. 17 varied the four parameters laser power, laser type, air spray and pulsed laser operation with 117 samples. It was concluded that Er,CR:YSGG lasers performed well, while diode and lasers did not perform well. However, other authors state that a laser rarely causes any unwanted tissue damage when used correctly 18 – 20 .
While this is obviously a contradiction, the important question is: why are there differences in the results? To investigate this question, various parameters are collected from the literature. A study by El-Sherif and King 21 for a laser with a wavelength of 2 µm showed that the pulsed mode results in less damage of soft tissue than the CW mode of the laser. In addition, the heat affected zone ranges from 120 to 160 µm for the pulsed laser mode and 400–800 µm for the CW mode 21 . For bone tissue, the heat affected zone is for a laser only 6 µm in comparison 22 . Similar results were found for an Er:YAG laser with a wavelength of 2.94 µm. While the heat affected zone for the q-switched laser was 5–10 µm, it increased to 10–50 µm for the spiking mode with longer pulse durations 23 . Furthermore, the results did not change for different tissue types including soft and hard tissue 23 . It can also be concluded from the results of Krapchev et al. 24 that the pulse frequency and duration should be below the thermal relaxation time to prevent unwanted tissue damage. This leads to the first two potential influencing parameters: wavelength and pulse duration.
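As a rough illustration of the thermal relaxation criterion, a common order-of-magnitude estimate is τ ≈ d²/(4α); the water-like diffusivity and the heated-layer size used below are assumptions for illustration, not values from the cited studies.

```python
# Order-of-magnitude estimate of the thermal relaxation time for a heated
# layer of characteristic size d: tau ~ d^2 / (4 * alpha). Geometry factors
# vary, so this is only a rule of thumb.
ALPHA_WATER = 1.4e-7  # thermal diffusivity of water-like tissue, m^2/s (assumed)

def relaxation_time_s(length_m, alpha=ALPHA_WATER):
    return length_m ** 2 / (4.0 * alpha)

# e.g. a heated layer of ~20 um suggests pulses shorter than ~0.7 ms
print(relaxation_time_s(20e-6))
```

Under this estimate, keeping the pulse duration below τ confines the heat to the absorbing layer instead of letting it diffuse into the surrounding tissue.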
Another factor to consider is cooling of the tissue. Ivanenko et al. 22 were able to show that small amounts of water prevented carbonisation, while larger amounts of water showed no further improvement. This is due to the fact that the water evaporates when too much heat is present, leading to a cooling effect 25 . More specifically, the additional water creates a temperature gradient from the water to the tissue, causing the heat to flow to the water rather than the surrounding tissue 26 . For air cooling, the results are contradictory: while Ivanenko and Hering 27 claim that pure gas cooling leads to more thermal damage, Afilal 28 claims the opposite. This leads to the next two potential influencing parameters: water cooling and air cooling.
Finally, the laser parameters should be considered. While the laser power is an obvious influencing parameter, the effect of scan speed could be shown by Afilal 28 . He was able to show that carbonisation could be prevented by using the correct scanning parameters. In another case, the temperature increase could be limited to 30 K instead of 400 K by using a scanning technology 29 . It is also known from the field of laser material processing (e.g. welding, cutting and additive manufacturing with lasers) that the line energy is an important parameter. This leads to the final potential influencing parameters: laser power, laser scan speed and laser line energy.
In total, seven potential influencing parameters that affect the laser surgery process can be identified from the literature: wavelength, pulse duration, water cooling, air cooling, laser power, laser scan speed and laser line energy. To quantify the quality of the cut, the previously published scoring system 30 was used. With the help of the scoring system, all previously mentioned parameters, except wavelength and type of cooling, will be investigated in this study because studying the influence of wavelength would require too many different lasers and water cooling is known to result in a significant improvement in cut quality at the cost of increased complexity 31 , 32 and longer cutting times. In addition, the number of breaks between laser scans and the effect of an exhaust system are studied. The number of breaks is used to study the effect of heat dissipation on the cut. The exhaust parameter is taken into account as it may be important to remove the plumes from the laser surgery process. All these parameters are compared to the effect of the inter animal variation as a benchmark for their importance. | Methods
The method section consists of four sections. In the first two sections, experimental procedure including the origin of the tissue and the statistical analysis is explained. In the following two sections, the varied parameters are discussed in detail.
Experimental procedure
In this section, the set-up and the experimental parameters that are not altered in this study are explained. The experimental setup is shown in Fig. 1 . The laser used is a MICROSTORM (FEHA LasterTec GmbH, Germany) with a wavelength of 10.6 µm. It is a diffusion-cooled CO2 laser with an acousto-optical modulator. The modulator is driven by an external pulse generator (DG1032Z, Rigol Technologies).
The experimental parameters are divided into constant and variable parameters. In this section only the constant parameters are explained; they are summarized in Table 1 . The temporal shape of the laser pulse is rectangular. The focal distance is 13.7 cm from the lens to the tissue sample. The focal diameter is 258 µm. Furthermore, the deflection of the laser beam is realised by a scan head (Scanlab GmbH). This allows the laser beam to be moved at a controlled speed over the sample. The temporal laser profile is set to be rectangular to ensure that the same light intensity is always applied to the tissue surface. A total of 1 cm long incisions were cut. There are 20 scans for each incision with a scan time of 10 ms. This leads to a cutting depth of up to 4 mm. If breaks are made between scans, the break time is 2 s. If the number of breaks is one, a break is made after 10 scans; if it is 3, a break is made after every 5 scans; and if the number is 19, a break is made after each scan. The laser beam is perpendicular to the surface of the table. In practice, however, the beam may not be exactly perpendicular to the tissue, as the tissue is not flat. This can be seen in Fig. 1 . Hence, the incident angle of the laser is taken to be approximately zero degrees.
For all experiments, bisected pork tissues of food quality were purchased from the local butcher. Therefore, an ethical proposal for animal experimentation is not required. The samples used were 1 cm thick pieces of pork from the topside. The laser power of 235 W for 0.2 s produced the highest energy input observed in all experiments (47 J). As a result, the sample size was always sufficiently large compared to the heat affected zone, so that the size of the tissue had no effect on the heat dissipation.
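The quoted maximum energy input follows directly from the constant parameters (20 scans of 10 ms at 235 W), which can be checked in two lines:

```python
# Consistency check of the maximum energy input: E = P * t,
# with t = 20 scans x 10 ms per scan.
n_scans = 20
scan_time_s = 0.010
power_w = 235.0

t_total = n_scans * scan_time_s   # total exposure time, s
energy_j = power_w * t_total      # deposited energy, J
print(t_total, energy_j)
```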
Table 2 provides an overview of the number of samples. There are five repetitions for each animal sample and parameter setting for the non-laser parameters, resulting in a total of 30 data points (n = 30) per parameter setting. The experiments for the laser parameters include 3 different animal samples with 10 repetitions each, so there are again 30 data points (n = 30) per parameter setting.
Statistical analysis
Experiment A
For the quantitative analysis, all cuts were scored using the scoring system presented in a previous paper 30 from our group (Fig. 2 there provides a good overview). In short, by looking at tissue damage at the rim or the cutting front of the cut, a score from 1 to 5 is given, where 5 is the best possible score. The scoring is performed based on the presence/amount of carbonization and the colour of the tissue. Figure 2 shows exemplary scores for different cuts. It should be noted that a score of 3–4 already denotes a fairly good cut in comparison to cuts from actual procedures such as in Fig. 4 b from Vanderhem et al. 33 . The cut shown there would be scored as 2. As it could be shown that the scoring of the cutting front is more reliable 30 , the complete analysis in this study is based on the scores of the cutting front.
For each data set, the effect of the variable is presented and the samples are compared for significant differences. All commands are taken from SciPy 34 and the names in parentheses indicate the corresponding SciPy command. The difference between the samples is tested with the Wilcoxon–Mann–Whitney test (“scipy.stats.mannwhitneyu”), as the scoring is an ordinal scale. As multiple comparisons are performed, the significance levels are reduced to 0.01 (*), 0.001 (**), 0.0001 (***) and 0.00001 (****). Afterwards, an analysis of variance (ANOVA) is performed with the help of the statsmodels framework 35 .
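The test named above corresponds to `scipy.stats.mannwhitneyu`; a minimal pure-Python sketch of the U statistic with midranks (needed for the heavily tied 1–5 scores) could look as follows. The example scores are hypothetical, not measured data.

```python
# Mann-Whitney U statistic for two ordinal samples, using midranks so that
# tied scores (common on a 1-5 scale) share the average rank.

def mannwhitney_u(a, b):
    pooled = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        i = j
    r_a = sum(ranks[v] for v in a)            # rank sum of sample a
    return r_a - len(a) * (len(a) + 1) / 2.0  # U statistic of sample a

scores_83w = [5, 4, 5, 4, 4]    # hypothetical cut scores at 83 W
scores_129w = [2, 3, 2, 3, 2]   # hypothetical cut scores at 129 W
print(mannwhitney_u(scores_83w, scores_129w))
```

Complete separation of the two samples yields the maximal U of n·m (here 25), while identical samples yield n·m/2; the p-value computation (normal approximation with tie correction) is what the SciPy routine adds on top.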
Influence non-laser parameters
The aim of this part is to evaluate the influencing parameters considered relevant in the literature, except the laser parameters. An overview of the varied parameters is given in Table 3 . In order to reduce the number of experiments required, two sets of experiments are performed: In experiment A, the effect of air cooling, number of breaks and pulse duration is investigated. In experiment B, the effect of exhausting the plumes is investigated. In both data sets, each parameter combination is repeated five times on six different animals. To conduct the analysis, a benchmark is required to determine the relevance of a given parameter. This can be established by examining the variance in the effects of cutting different animals, as for practical applications the effect of cutting different animals has to be lower than the effect of a given parameter. By quantifying how much of the scoring variance is attributable to the different animals, a benchmark is generated that takes into account the tissue’s storage and pre-treatment.
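The inter-animal benchmark amounts to asking what fraction of the score variance the animal factor explains; a minimal sketch of this one-way variance decomposition (eta squared) with hypothetical scores:

```python
# Eta squared: share of total score variance explained by a grouping factor
# (here: animal). Scores are hypothetical illustrations, not measured data.

def eta_squared(groups):
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    return ss_between / ss_total

animals = [[4, 5, 4], [3, 3, 4], [5, 5, 4]]  # scores per animal (hypothetical)
print(round(eta_squared(animals), 3))
```

A parameter whose own eta squared falls below this animal-to-animal share would be hard to justify as practically relevant.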
The experiments in experiment A are performed with two laser powers P : P = 83 W (irradiance = 160) and P = 129 W (irradiance = 247), in order to evaluate all non-laser parameters for a more optimal case (83 W) and a case with more tissue damage (129 W). For the first laser power, it is tested whether the parameters can worsen the cut, and for the second, whether the tested parameters can lead to optimal laser cuts. For all experiments, the scan speed is kept constant at 1, resulting in an illumination time of 10 ms. Furthermore, a separate ANOVA analysis is performed for each laser power.
For experiment B, only the laser power of 129 W and the pulse duration of 964 ns are used. The parameters number of breaks and air cooling are varied as in experiment A. The number of parameters is reduced to decrease the number of experiments required. The pulse duration is excluded as it is a laser parameter, and the laser power of 129 W is chosen to evaluate whether exhausting the plumes can improve a non-optimal laser cut.
Influence of laser parameters
The aim of this part is to evaluate the influencing laser parameters. An overview of the parameters is provided in Table 4 . The line energies $E_L$ are chosen to be 83, 172 and 235. The line energy is calculated as $E_L = P / v_s$, where $v_s$ is the scan speed. A total of 270 cuts are evaluated for three animals: 90 for each line energy. For each line energy, 3 laser power/scan speed combinations are selected. Thus, each parameter combination is performed ten times.
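The bookkeeping behind this design can be captured in a one-line helper. The units were lost in extraction, so the numbers below only reproduce the stated relation (power divided by scan speed) under assumed consistent units:

```python
def line_energy(power, scan_speed):
    """Line energy E_L = P / v_s (assumed consistent units)."""
    return power / scan_speed

# at the reference scan speed of 1, power and line energy coincide numerically,
# consistent with the 83 W / line-energy-83 pairing in the text
e1 = line_energy(83.0, 1.0)
# doubling the power at doubled scan speed leaves the line energy unchanged
e2 = line_energy(166.0, 2.0)
```

This also makes explicit why, for a fixed line energy, choosing the scan speed fixes the laser power (and vice versa).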
Each section of the results is divided into two parts. The first part presents the means and tests the data sets for significant differences. The second part presents the results of the ANOVA analysis.
Influence of non-laser parameters
Experiment A
Figures 3 and 4 show the average scores for the different parameters with standard deviations and significance levels. For the laser power of 83 W, the cutting quality is overall higher than for the cuts with a laser power of 129 W. This is because the laser power of 129 W is deliberately chosen too high, so that possible improvements by the different parameters can be observed. In both cases, a higher number of breaks reduces the unwanted tissue damage. Also, for both laser powers, air cooling and different pulse durations show no or little significance and variation.
For 83 W, a shorter pulse duration seems to reduce the cutting quality; however, this effect is only weakly significant. The air pressure seems to reduce the cutting quality, but the influence is not significant. The already good results could be improved by giving the laser more breaks.
For 129 W, there is no significant effect of pulse duration. As for 83 W, the air pressure seems to reduce the cutting quality slightly; this effect is weakly significant. As the laser power was chosen too high, increasing the number of breaks strongly increases the cutting quality. This could be due to the fact that the heat has more time to diffuse into the surrounding tissue, resulting in less heat accumulation. As more cuts reach a score of five, even the amount of coagulated tissue is reduced.
Tables 5 and 6 show the results of the ANOVA for a laser power of 83 and 129 W, respectively. Overall, the results are similar to the previous analysis. The effect of breaks is comparably large and highly significant. In addition, the effect of inter-animal variation is highly significant and can explain more of the variance than pulse duration and air cooling. Therefore, in addition to the conflicting significance, the effect of these two parameters can be discarded as they explain less variance. It should also be noted that $R^2$, and especially the adjusted $R^2$, is relatively low. Thus, most of the variance cannot be explained.
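The "explainable variance" entries in such tables can be read as eta-squared values (between-group sum of squares over total sum of squares). The following self-contained sketch uses synthetic scores, constructed so that breaks matter most, the animal moderately, and cooling not at all, mimicking only the qualitative structure reported here (not the real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic scores with an assumed effect structure (not the study's data)
n = 300
breaks = rng.integers(0, 3, n)     # 3 break settings
cooling = rng.integers(0, 2, n)    # on/off
animal = rng.integers(0, 6, n)     # 6 animals
score = 2.0 + 1.0 * breaks + 0.3 * animal + rng.normal(0.0, 0.5, n)

def eta_squared(y, factor):
    """Fraction of total variance explained by one factor:
    between-group sum of squares / total sum of squares."""
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    ss_between = sum((factor == g).sum() * (y[factor == g].mean() - grand) ** 2
                     for g in np.unique(factor))
    return ss_between / ss_total

etas = {name: eta_squared(score, f)
        for name, f in (("breaks", breaks), ("cooling", cooling), ("animal", animal))}
```

With this construction, breaks explain the largest share of variance, the inter-animal variation a moderate share, and cooling almost nothing, which is the ordering the ANOVA tables use to discard weak parameters.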
For 83 W, air cooling shows no significant effect and the explainable variance is low. While the pulse duration shows a significant effect, the explainable variance is lower than for the inter-animal variation. Thus, the normal inter-animal variation masks the effect of both parameters. It should also be noted that the adjusted $R^2$ is only 0.38.
For 129 W, the effects of air cooling and pulse duration are the opposite of those for 83 W. The pulse duration shows no significant effect and the explainable variance is low. While air cooling shows a significant effect, the explainable variance is comparable to the inter-animal variation. For this case, the adjusted $R^2$ is 0.45.
In summary, it can be concluded that, contrary to the literature, pulse duration has no or a contradictory effect on cutting quality. Furthermore, its effect on the final result is less than the inter-animal variation. Therefore, its effect can be ignored. Air cooling also has little or no effect on cutting quality. Therefore, air cooling should not be used, also in the interest of reducing the complexity of the set-up. The number of breaks between cuts shows a strong effect and significantly improves the results. In addition, the number of breaks leads to much stronger effects than the inter-animal variation. This parameter should therefore be taken into account. For practical application, however, there is a conflict of objectives. On the one hand, tissue damage should be low. This is favoured by a high number of breaks. On the other hand, the operation should be fast. This is favoured by a low number of breaks.
Experiment B
Figure 5 shows the average scores for the different parameters with standard deviations and significance levels. The results for the breaks and the air cooling are almost identical to the results for 129 W in experiment A; hence, the results of this study are reproducible. The plume-exhaust parameter shows no significant effect. These results are also supported by the ANOVA in Table 7 . The significance of the number of breaks is high and it explains a large amount of variance. At the same time, exhaust and air cooling contribute minimally to the explainable variance, and their significance level is much lower than for the number of breaks. Therefore, neither air cooling nor exhaust plays a major role; both parameters can be adjusted according to other requirements of laser surgery.
Influence of laser parameters
Figure 6 shows the effect of line energy and laser power or scan speed, respectively. It can be seen that the line energy has a strong influence on the resulting cut quality. The smallest line energy tested, 83, gives the best cutting quality. A scan speed of 1 gives the best results regardless of the laser power and line energy. Thus, it appears that for a given line energy, the scan speed has a more significant effect than the laser power. For the comparably optimal line energy of 83, the scan speed has only a small effect. For less optimal parameters, a non-optimal scan speed can further reduce the quality of the cut. The strongest effect of the scan speed occurs when the scan speed is low, whereas at higher scan speeds, the influence of the scan speed becomes insignificant. In summary, line energy is the most important parameter and scan speeds of 1 or higher are preferred.
These overall results are supported by the ANOVA analysis shown in Table 8 : both laser parameters are significant and the line energy explains most of the variance. The explained variance is higher than for any of the non-laser parameters. As both parameters are also highly significant, it can be concluded that the laser parameters are the most important parameters. The effect of laser power/scan speed is analysed again separately for each line energy to clearly show whether laser power/scan speed or line energy is the more important parameter. It should be noted that the parameters scan speed and laser power cannot be separated in this analysis, as they are directly proportional at fixed line energy. Their relative importance has to be concluded from Fig. 6 .
Tables 9 , 10 and 11 show the effect of laser power/scan speed for line energies of 83, 172 and 235. For the optimal line energy of 83 (Table 9 ), the effect of laser power/scan speed is hardly significant and only 11% of the variance can be explained. This is supported by the low value of $R^2$. Therefore, for an optimal line energy, laser power and scan speed are not important.
These results change when a non-optimal line energy is used, as shown in Tables 10 and 11 . In this case, the effect of laser power and scan speed becomes significant and explains a higher percentage of the variance. The $R^2$ value also increases. All three effects grow as the line energy moves further away from the optimal value. Nevertheless, the optimal choice of laser power or scan speed can only improve the cut quality to a certain extent, which is limited by the non-optimal line energy. In other words: the line energy determines the maximum achievable cut quality.
Limitations
The main limitation of this study is the fact that all experiments were performed in an ex-vivo setting. This clearly limits the generalisability of the results presented, as important parameters such as time to heal cannot be assessed. However, the chosen ex-vivo setting allows the study of a large number of laser cuts and, due to the lack of perfusion, tissue damage may appear more readily. Therefore, it is plausible that parameters from the ex-vivo setting can be transferred to the in-vivo case.
The second limitation is that only one type of tissue is used for all laser cuts. While this restriction is necessary in this study to isolate the effects of the different parameters, it does limit the generalisability. It is likely that the results can be transferred to at least some types of soft tissue, but generalisation to hard tissue such as bone is not possible without further experiments. However, with the approach demonstrated here, it may be possible to find a parameter setting that allows bone cutting with a laser.
A third limitation is that pig tissue is used instead of human tissue. Thus, the transferability is limited. However, the parameters tested show a much stronger effect than the inter-animal variation. Therefore, it is expected that the presented results are at least partially transferable to other species.
The fourth limitation is the size of the cuts and the fixed laser focus. All cuts had a length of 1 cm and a depth of up to 4 mm. This leads to two conclusions. Firstly, deeper cuts are expected to result in more tissue damage; therefore, the optimal parameters may be different or may even have to be adjusted during laser surgery, in other words, the optimal laser parameters might be depth dependent. Secondly, the fixed laser focus might limit the cutting depth; hence, the cutting depth could be greater with an adjusted focus position.
In this study, the parameters pulse duration, laser power, laser scan speed, line energy, air cooling, exhaust system and number of breaks between cuts were investigated. Among these parameters, line energy is the most important: it determines the maximum cutting quality that can be achieved. If the correct line energy is known, the scan speed and the number of breaks are similarly important parameters. Interestingly, a scan speed of about 1 is optimal for all line energies tested. For the number of breaks, it can be said that the more breaks there are, the more time the heat has to dissipate and the less unwanted tissue damage there will be. However, the choice of the number of breaks should consider the practical requirements: a lower number of breaks is required to speed up the laser surgery process. As the line energy has to be optimized and the scan speed should be around 1, the laser power is thereby fixed. All the other parameters (pulse duration, air cooling, exhaust system) are not relevant for laser surgery with this laser.
This leads to the following conclusions for the use of the laser in soft tissue. Because of the importance of line energy and scan speed, the laser is not suitable for a manual handheld device; it should be used with a robot or remote system to achieve high-quality cuts. Nevertheless, it is possible to use the laser for tissue ablation, so it remains attractive for some applications 33 , 36 , 37 . However, the laser excels under remote operation conditions. As the pulse duration of the laser has no effect on the cutting quality, any pulse duration can be used. However, this result cannot be transferred to near-infrared laser types, as the laser considered here is superficially absorbed, unlike e.g. Nd:YAG lasers operating at 1064 nm. For frequency-doubled or -tripled Nd:YAG lasers or Er:YAG lasers, the absorption is also comparably high, caused by hemoglobin for the frequency-doubled or -tripled Nd:YAG lasers and by water for the Er:YAG lasers. Hence, it makes sense to investigate whether the pulse duration is likewise irrelevant for the cutting quality of these lasers. The small effect of the air cooling and exhaust-system parameters can probably be generalised to most other laser types. Therefore, air cooling is not required and the exhaust system can be adapted to other requirements of the laser surgery system.
Of the parameters investigated, laser power and scan speed have the strongest influence. Only the right combination of these two parameters allows good results. Other effects, such as the use of pulsed or continuous wave (CW) laser operation, or air cooling, show a small or negligible influence. By modulating only the laser power and scan speed, an almost perfect cut can be achieved with a laser, regardless of the external cooling used or the laser pulse duration or repetition rate from CW to nanosecond pulses.
The online version contains supplementary material available at 10.1038/s41598-024-51449-1.
Acknowledgements
The authors gratefully acknowledge funding of the Erlangen Graduate School in Advanced Optical Technologies (SAOT) by the Bavarian State Ministry for Science and Art.
Disclosure
In the writing process during the preparation of this work, the authors used “DeepL Write” in order to improve language and readability. After using this tool, the authors reviewed and edited the content as necessary and they take full responsibility for the content of the publication.
Author contributions
M.H. conceptualized the research, performed the data analysis, prepared the manuscript and supported D.K. in the lab work. D.K. performed the experimental work and part of the data analysis. D.N., M.S. and A.G. supported the data analysis. M.R. specified the work from the medical point of view. F.S., F.K. and M.S. guided the general research strategy. All authors reviewed the manuscript.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Data availability
All data analysed during this study is included in the supplementary information files.
Competing interests
The authors declare no competing interests.

Citation: Sci Rep. 2024 Jan 13; 14:1263
PMC10787783 (PMID: 38218953)
Pilot-wave theory (also called de Broglie–Bohm theory or Bohmian mechanics) is a realist, nonlocal formulation of quantum mechanics originally presented by de Broglie at the 1927 Solvay conference 1 , 2 . In 1952, Bohm showed how the theory solves the vexed measurement problem of orthodox quantum mechanics by describing the measurement apparatus within the theory 3 , 4 . The theory has been extended to the relativistic domain 5 – 9 , applied to astrophysical and cosmological scenarios 10 – 13 , and provides a counter-example to the claim that quantum phenomena imply a denial of realism.
In his description of the theory, Bohm pointed out that certain assumptions are necessary to reproduce orthodox quantum mechanics. Further, he opined that these assumptions may need modifications in regimes not yet experimentally accessible, so that the theory may either supersede or depart from orthodox quantum mechanics in the future 3 – 5 , 14 . One of these assumptions is that the initial density of configurations equals the Born rule density. This assumption has been criticised on the grounds that, since there is no logical relation between the initial configuration density and the quantum state in the theory, it is ad hoc 15 , 16 . Bohm was able to show that adding random collisions 14 or random fluid fluctuations 17 to the dynamics of the theory leads to relaxation from an arbitrary density to the Born rule density. Later, Valentini showed that the original dynamics alone is sufficient for relaxation to occur at a coarse grained level 18 , 19 . Numerous computational studies have since been conducted that have furthered our understanding of the relaxation process in various scenarios (see 13 for a review).
However, a simple but important conceptual point has remained largely unnoticed in the literature: if there is no logical relationship between the configuration density and the quantum state in pilot-wave theory, then why should the quantum state be normalizable? In orthodox quantum mechanics, normalizability is necessary as statistical predictions are extracted from the quantum state according to the Born rule. On the other hand, in pilot-wave theory the quantum state serves as a physical field that determines the evolution of the configuration. To extract statistical predictions from the theory, one only needs to define an ensemble with a normalized density of configurations – normalizability of the quantum state is unnecessary. This opens up the possibility of physically interpreting non-normalizable quantum states that occur as solutions to physical constraints in quantum gravity, such as the Kodama state 20 – 22 .
However, to the best of our knowledge, the behaviour of non-normalizable solutions to the Schrodinger equation has not been studied from a pilot-wave perspective. In this article, we make a first step in this direction by studying the non-normalizable solutions of the harmonic-oscillator potential. We choose the harmonic oscillator as it is widely found in nature, and because the normalizability constraint leads to the important discretization of energy levels. The article is structured as follows. We first study the non-normalizable solutions of the harmonic oscillator, using both the analytic approach and the ladder-operator approach. We then study the pilot-wave theory of the non-normalizable states. We show that the pilot-wave velocity field for the non-normalizable states goes to zero at large $|y|$, where y is the dimensionless position variable. We discuss the relaxation behaviour for these states. We then introduce the notion of pilot-wave equilibrium and define the corresponding new H -function. We prove an H -theorem applicable to non-normalizable states using a coarse-grained H -function, analogous to the H -theorem for quantum equilibrium. We study the relationship between relaxation to pilot-wave equilibrium and relaxation to quantum equilibrium. Lastly, we discuss the theoretical and experimental implications of our work. In particular, we show that non-normalizable states are unstable in the presence of perturbations and environmental interactions, and thereby give an explanation of quantization in pilot-wave theory.
We have discussed some of the implications of our work in the previous section. However, the list of implications is necessarily inexhaustive, as the normalizability constraint is ubiquitous in orthodox quantum mechanics. It would, for example, be interesting to study non-normalizable solutions to the Schrodinger equation for other systems, say the Hydrogen atom, or to the Dirac equation. An important result of our work is that the non-normalizable harmonic-oscillator solutions are bound states, in the sense that the pilot-wave velocity field goes to zero at large $|y|$. It is important to figure out the general conditions under which the pilot-wave velocity field has this behaviour. Another important result is that perturbations and interactions make non-normalizable states unstable, in the sense that the system configuration becomes overwhelmingly likely with time to be in a normalizable branch of the total quantum state. Lastly, it remains unclear how to construct a well-defined basis for such states.
We note that, according to our work, the explanation for quantization given by pilot-wave theory is drastically different from that of quantum mechanics. Quantization in quantum mechanics arises from the axiom of Born rule, whereas in pilot-wave theory quantization is an emergent phenomenon that arises from the instability of non-normalizable states due to perturbations and environmental interactions. In this sense, the status of non-normalizable states in the theory may be said to be analogous to that of non-equilibrium ensembles as ( a ) the conceptual structure of the theory allows the logical possibility of both non-normalizable states and non-equilibrium densities, and ( b ) the theory also possesses the internal logic necessary to explain why we do not observe either of them in present-day laboratories.
We note that the H -theorem does not by itself prove that relaxation to pilot-wave equilibrium occurs, but provides a general mechanism to understand how equilibrium is approached, similar to the status of the generalized H -theorem in classical statistical mechanics 25 . Whether relaxation in fact occurs in finite time, whether it is monotonic etc. depend significantly on whether the velocity field yields sufficient mixing. It is well known in the literature on relaxation in pilot-wave theory 13 , 19 , 34 that the velocity field varies rapidly around nodes (if they exist) and thereby causes efficient relaxation in general. Therefore, future numerical simulations using superpositions of non-normalizable eigenstates can provide evidence whether relaxation to pilot-wave equilibrium indeed occurs, similar to relaxation to quantum equilibrium for normalizable states. It is useful to note here that the boundedness of the solutions ensures that the support does not necessarily become filamentous with time. For example, if the support is sufficiently large to cover the region around the origin and $|\psi|$ is very large near its boundary, then the density will remain effectively static there, as the radial velocity field will be very small in that region. Lastly, we note that the coarse-graining cells do not become filamentous as they do not evolve with time, unlike the configuration density.
From a historical perspective, we know that the initial conditions of pilot-wave theory have usually been so restricted as to reproduce orthodox quantum mechanics. An important departure was made when nonequilibrium densities were taken seriously in the theory, and the notion of quantum equilibrium was defined 18 , 28 . But the notion of quantum equilibrium is still restrictive as it assumes that a density in equilibrium always reproduces orthodox quantum mechanics. The notion of pilot-wave equilibrium makes one further step, in which this restriction is jettisoned. Therefore, generalising the notion of quantum equilibrium to pilot-wave equilibrium may be seen as a logical step towards treating pilot-wave theory as a theory in its own right, instead of as a hidden-variable reformulation of orthodox quantum mechanics.
It may appear that the restriction of the configuration density to compact supports limits the physical applicability of pilot-wave equilibrium. However, this is incorrect, as we can always approximate a density with global support up to arbitrary accuracy using a density with compact support. This can be done by defining an arbitrarily small but finite cut-off parameter $\epsilon$ so that, wherever the global density $\rho(x) < \epsilon$ at a point x of the configuration space, we define the compact density $\rho_c(x) = 0$, while $\rho_c(x) = \rho(x)$ (up to normalization) at all other x . Further, global supports imply arbitrarily small probabilities that cannot be empirically verified and are, therefore, mathematical idealisations. For example, a Hydrogen atom in a lab on Earth has a finite but arbitrarily small probability of being found, in a position measurement, arbitrarily far away from the Earth. But observing such an extremely tiny probability trillions of light years away would take many times more than the current age of the universe in any realistic experimental setup.
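This truncation procedure is easy to make concrete. The sketch below (using an illustrative one-dimensional Gaussian density and an assumed cut-off value) sets the density to zero below $\epsilon$, renormalizes, and confirms that the compact-support density stays close in L¹ norm to the global one:

```python
import numpy as np

# illustrative global density: a standard normal on a finite grid
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
rho = np.exp(-x**2 / 2) / np.sqrt(2.0 * np.pi)

eps = 1e-6                        # assumed cut-off parameter
rho_c = np.where(rho < eps, 0.0, rho)
rho_c /= rho_c.sum() * dx         # renormalize the compact-support density

# the two densities differ only by a tiny amount in L1 norm
l1_distance = np.abs(rho - rho_c).sum() * dx
```

The L¹ distance shrinks as the cut-off parameter is reduced, which is the sense in which the compact density approximates the global one to arbitrary accuracy.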
There are several implications of our work for pilot-wave theory. First, our work suggests a constraint on the pilot-wave velocity field. We know that the pilot-wave velocity is not uniquely defined, as one can always add a divergence-free term to the current. In the context of non-normalizable states, the velocity field plays the important role of determining whether a given state is bounded. Therefore, it seems reasonable to impose the constraint that the addition of a divergence-free term to the current does not affect the boundedness of the state. That is, if the (usually defined) pilot-wave velocity field goes to 0 at large $|y|$, then this behaviour must be preserved on modifying the current. It would be interesting to figure out the class of possible divergence-free terms that satisfy this property. Second, our work may help in distinguishing pilot-wave theory from orthodox quantum mechanics and other realist interpretations of quantum mechanics. For example, some authors have claimed that the system configuration in pilot-wave theory is superfluous and that the theory is actually a many-worlds theory in disguise 35 – 37 . As we have seen, however, the existence of a configuration density in the theory makes it possible to extract statistical predictions from non-normalizable quantum states. Therefore, the interpretation of non-normalizable states may turn out to be a crucial difference between the two theories. Third, we note that the notion of pilot-wave equilibrium, although introduced in the context of non-normalizable quantum states, is equally applicable to normalizable quantum states. It would be of interest to figure out whether densities partially relaxed to quantum equilibrium in previous numerical simulations have in fact relaxed to pilot-wave equilibrium. Lastly, our results imply that a unitary evolution involving non-normalizable states is dynamically equivalent to a corresponding non-unitary evolution involving appropriate normalizable states.
This suggests that non-unitary evolution in some applications of orthodox quantum mechanics may in fact be an artefact of insistence on state normalizability. This also implies that, for normalizable states, unitary evolution is not necessary for relaxation to pilot-wave equilibrium.
Our work also has implications for the $\psi$-ontic versus $\psi$-epistemic debate 38 – 40 . Non-normalizable quantum states do not make sense from a $\psi$-epistemic viewpoint, in which the role of the quantum state is to define probabilities. If the existence of non-normalizable quantum states is proved experimentally, or if such states are found to be crucial in fields like quantum cosmology or quantum gravity, then it would be difficult to argue in favour of $\psi$-epistemicity. We note that, once pilot-wave equilibrium is reached at a coarse-grained level, the equilibrium relation between the density and $|\psi|^2$ on the support suggests how a $\psi$-epistemic interpretation may emerge at an effective level from an underlying $\psi$-ontic theory.
We conclude that pilot-wave theory naturally suggests consideration of the possibility of non-normalizable quantum states, which we have studied for the case of harmonic oscillator. Such states have a physically-meaningful notion of an equilibrium density. We have argued that quantization emerges in pilot-wave theory due to the instability of non-normalizable states to perturbations and environmental interactions. Further work is needed to determine whether such states actually exist in nature. | Non-normalizable states are difficult to interpret in the orthodox quantum formalism but often occur as solutions to physical constraints in quantum gravity. We argue that pilot-wave theory gives a straightforward physical interpretation of non-normalizable quantum states, as the theory requires only a normalized density of configurations to generate statistical predictions. In order to better understand such states, we conduct the first study of non-normalizable solutions of the harmonic oscillator from a pilot-wave perspective. We show that, contrary to intuitions from orthodox quantum mechanics, the non-normalizable eigenstates and their superpositions are bound states in the sense that the velocity field at large . We argue that defining a physically meaningful equilibrium density for such states requires a new notion of equilibrium, named pilot-wave equilibrium, which is a generalisation of the notion of quantum equilibrium. We define a new H -function , and prove that a density in pilot-wave equilibrium minimises , is equivariant, and remains in equilibrium with time. We prove an H -theorem for the coarse-grained , under assumptions similar to those for relaxation to quantum equilibrium. We give an explanation of the emergence of quantization in pilot-wave theory in terms of instability of non-normalizable states due to perturbations and environmental interactions. 
Lastly, we discuss applications in quantum field theory and quantum gravity, and implications for pilot-wave theory and quantum foundations in general.
We start by noting that several elementary theorems of orthodox quantum mechanics are no longer applicable once the normalizability constraint on the quantum state is dropped. In the non-normalizable scenario, eigenstates in one dimension are generally degenerate and complex, as the relevant theorems on degeneracy and reality of eigenstates no longer apply. Furthermore, a non-normalizable quantum state does not, in general, have a Fourier transform, and therefore no momentum representation. This is because the Fourier transform exists only if the concerned function does not diverge faster than a polynomial at large values of its argument. Therefore, we are restricted to the position representation of the quantum state in general. This makes sense from a pilot-wave perspective, as the position basis is the preferred basis in the theory. We also note that the momentum operator is in general non-Hermitian in this scenario.
For the harmonic-oscillator potential, the energy eigenvalues are not quantized and can also take negative values in this scenario. Mathematically, the eigenvalues could also be complex, but this is not physically meaningful from a pilot-wave perspective. Consider a von Neumann energy measurement, which leads to apparatus wavefunctions of the form $\phi(y - gEt)$, where E is the energy eigenvalue and g is the strength of interaction between the system and apparatus. Such a wavefunction is not defined on the configuration space if E is complex. Therefore, allowing complex eigenvalues is only possible if one abandons the configuration space as the fundamental arena of pilot-wave theory. Lastly, we restrict the initial wavefunction to eigenstates and finite superpositions, as the time-evolution operator may not be well-defined for an arbitrary initial wavefunction 23 . With these facts in mind, let us study the non-normalizable solutions to the harmonic oscillator from a pilot-wave perspective.
The time-independent Schrodinger equation for the harmonic-oscillator potential can be written as $\frac{d^2\psi}{dy^2} + (K - y^2)\psi = 0$ (1), where $y = \sqrt{m\omega/\hbar}\,x$ and $K = 2E/\hbar\omega$. The equation is traditionally solved by using the ansatz $\psi(y) = h(y)\,e^{-y^2/2}$. Substituting the ansatz into Eq. ( 1 ), we get $h'' - 2yh' + (K-1)h = 0$ (2). Equation ( 2 ) is known as the Hermite differential equation. It contains both normalizable and non-normalizable solutions to ( 1 ). Using the Frobenius method, the general solution to ( 2 ) can be written as $h(y) = \sum_{m=0}^{\infty} a_m y^m$ (3), where $a_0$ and $a_1$ are two arbitrary complex constants and the recurrence relation between the $a_m$'s can be obtained to be $a_{m+2} = \frac{2m+1-K}{(m+1)(m+2)}\,a_m$. It is useful for us to rewrite Eq. ( 3 ) as $h(y) = a_0\,h_1(y) + a_1\,h_2(y)$ (4), where $h_1$ collects the even powers (taking $a_0 = 1$) and $h_2$ the odd powers (taking $a_1 = 1$). Clearly, the term $h_1(y)$ consists only of even powers of y , whereas the term $h_2(y)$ consists only of odd powers.
It is useful to note that , can be expressed in closed form as follows: where is the confluent hypergeometric function of the first kind and is the Pochhammer symbol.
The general solution to the time-independent Schrodinger Eq. ( 1 ) can be written as where and . Equation ( 10 ) is a valid solution to the Schrodinger Eq. ( 1 ) for all (real) values of K . It can be shown that the series ( ) terminates only if for an even (odd) n . In that case, ( ) has a dependence at large and is normalizable. If for an even (odd) n , then ( ) has a dependence at large and is non-normalizable.
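The termination condition can be illustrated directly. Assuming it is $K = 2n + 1$ (consistent with the orthodox spectrum discussed below), for $K = 5$ the even-branch series terminates after the $y^2$ term and is proportional to the Hermite polynomial $H_2$:

```python
from scipy.special import eval_hermite

def even_series(K, y, n_terms=60):
    # even-branch Frobenius series of f'' - 2y f' + (K-1) f = 0, with a_0 = 1
    f, a = 0.0, 1.0
    for n in range(0, 2 * n_terms, 2):
        f += a * y**n
        a *= (2 * n - (K - 1)) / ((n + 1) * (n + 2))   # vanishes at n = (K-1)/2
    return f

y = 0.7
f = even_series(K=5, y=y)            # K = 2n+1 with n = 2: the series terminates
# the terminating series 1 - 2y^2 is -H_2(y)/2, since H_2(y) = 4y^2 - 2
print(f, -eval_hermite(2, y) / 2)    # both ≈ 0.02
```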
The complex coefficients , contain a total of 4 real parameters. We can eliminate 2 of the parameters by (a) normalizing the coefficients so that (note that the quantum state is itself non-normalizable in general) and (b) eliminating the global phase. Both steps (a) and (b) make sense from a pilot-wave theory perspective as the pilot-wave velocity field , where j ( y ) is the quantum probability current (see Eq. ( 17 ) below), does not depend on the global magnitude or the global phase of the quantum state. That is, a transformation of the form , where is a complex constant, does not change v ( y ). Therefore, we may further simplify Eq. ( 10 ) to where , , and , . In this form, it is clear that and act as basis vectors of the doubly degenerate subspace corresponding to K . We note that, in orthodox quantum mechanics, steps (a) and (b) are justified (for normalizable states) on the grounds that is a probability density. Clearly, cannot be interpreted as a probability density in our case, but (a) and (b) are still valid from a pilot-wave perspective.
We can connect the general solution ( 11 ) to the allowed solutions in orthodox quantum mechanics as follows. We know that the allowed energy levels in orthodox quantum mechanics are given by , where n is a non-negative integer. Furthermore, we know from the preceding discussion that for all even n , is normalizable and is non-normalizable. Similarly, for odd n , is normalizable and is non-normalizable. Therefore, where is the harmonic-oscillator eigenstate in orthodox quantum mechanics, and is the relevant normalization constant.
Let us consider a superposition of eigenstates corresponding to different values of K . Suppose . As before, we normalize the coefficients ( ) and eliminate the global phase of , as the velocity field is unaffected by these changes. We also know, from the time-dependent Schrodinger equation, that will evolve as Lastly, it is straightforward to extend the discussion to a system of N particles, each in a harmonic oscillator potential. Consider the quantum state We normalize the coefficients and eliminate the global phase of . The time evolution of can be easily calculated by the time-dependent Schrodinger equation. We discuss the action of ladder operators on non-normalizable states in the Supplementary Information .
Bound-state interpretation of non-normalizable harmonic oscillator states
In pilot-wave theory, the quantum state serves to define the velocity field for the evolution of the system configuration. This can be a configuration of particles, as in pilot-wave theory of non-relativistic quantum mechanics, or a configuration of fields, as in pilot-wave theory of quantum field theory. Let us consider a system of N particles in the harmonic oscillator potential with the quantum state ( 14 ). Without loss of generality, we suppose that all the particles have the same mass m for simplicity. The time-dependent Schrodinger equation implies the continuity equation where is a point on the configuration space, and the current is defined in terms of and which is the complex conjugate of . From Eq. ( 15 ), the quantity is defined as the pilot-wave velocity field. Let us consider an ensemble of the N -particle harmonic oscillator systems. As there is no a priori relationship between the quantum state and the configuration density in pilot-wave theory, we can define an initial normalized density for the ensemble. Equation ( 17 ) supplies the velocity field to evolve : Clearly, experimental probabilities are well-defined as is normalized. However, there remains the question whether the velocity field ( 17 ) behaves physically for non-normalizable states. One example of an unphysical behaviour would be if increases with as ( ) for . In that case, the system configuration will escape to in finite time. In orthodox quantum mechanics, we know that such behaviour cannot occur as the normalizability constraint ensures that the probability density as . For this reason, the normalizable states are referred to as bound states in orthodox quantum mechanics.
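The elided definitions here are presumably the standard pilot-wave expressions; in the usual notation (with $\mathbf{Q}$ the system configuration and $\psi$ defined on configuration space) they read:

```latex
\frac{\partial |\psi|^2}{\partial t} + \nabla \cdot \mathbf{j} = 0, \qquad
\mathbf{j} = \frac{\hbar}{2mi}\left(\psi^* \nabla \psi - \psi \nabla \psi^*\right), \qquad
\mathbf{v} = \frac{d\mathbf{Q}}{dt} = \frac{\mathbf{j}}{|\psi|^2},
```

with the configuration density $\rho$ transported by the same velocity field, $\partial_t \rho + \nabla \cdot (\rho \mathbf{v}) = 0$.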
We can straightforwardly generalise the definition of bound state to the non-normalizable scenario: if the velocity field ( 17 ) defined by is such that in the limit for all , then is a bound state. Such a velocity field ensures that any initial normalized configuration density will evolve to such that as for all . That is, the system configuration remains bounded at all (finite) times.
Below, we prove that the non-normalizable solutions of the harmonic oscillator are bound states in this sense. We begin with the simplest case, that of an eigenstate in one dimension.
Velocity field of an eigenstate in one dimension
Let us consider the velocity field of a harmonic oscillator eigenstate . We know from orthodox quantum mechanics that the normalizable eigenstates defined by ( 12 ) are real. This implies that, for these states, the velocity field is zero everywhere and the particle is stationary. However, is complex in general. This implies that the velocity field for non-normalizable eigenstates is non-zero in general. Let us then calculate this velocity field.
We first note the general result that, if is an eigenstate of the Hamiltonian, then In orthodox quantum mechanics, as as . In our case, on the other hand, as so that the left-hand side of Eq. ( 19 ) becomes indeterminate at . However, it is convenient to evaluate the left-hand side of ( 19 ) for at . This is because the following readily verifiable calculations imply that so that the current j ( y ) is constant and independent of K .
Therefore, the velocity field is where, in Eq. ( 28 ), we have used and ( 24 ).
Let us discuss the velocity field ( 28 ). First, Eq. ( 28 ) tells us that, for an eigenstate corresponding to K , the velocity field is constant with time. Second, it tells us that the velocity field depends on the angles , , so that degenerate eigenstates corresponding to the same K will, in general, have velocity fields that are different but proportional to each other at every y . Third, the velocity field does not change sign with y . Fourth, we note that the velocity field for an eigenstate corresponding to ( ) has no apparent connection with the velocity field for an eigenstate corresponding to . Lastly, and most importantly, Eq. ( 28 ) tells us that the velocity fields are inversely proportional to . This implies that, for as we know that diverges like at large . Therefore, the velocity field decreases very quickly to 0 as becomes large at (see Fig. 1 ). This implies that is a bound state , according to our definition, although it is non-normalizable. This is a surprising behaviour from the viewpoint of orthodox quantum mechanics, as a naive application of the Born rule would imply an infinitely large probability of the particle being found at large .
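These properties can be verified numerically. A sketch, assuming the ${}_1F_1$ closed forms for the two degenerate branches, an arbitrary complex superposition of them, and units $\hbar = m = 1$: the current $j = \mathrm{Im}(\psi^*\psi')$ comes out constant in $y$, while $v = j/|\psi|^2$ falls off rapidly as $|y|$ grows:

```python
import numpy as np
from scipy.special import hyp1f1

K = 2.5                              # a non-quantized eigenvalue
c1 = np.exp(0.3j) / np.sqrt(2)       # complex superposition of the
c2 = np.exp(-1.1j) / np.sqrt(2)      # two degenerate branches

def psi(y):
    # non-normalizable eigenstate in the assumed 1F1 closed form
    f_even = hyp1f1((1 - K) / 4, 0.5, y**2)
    f_odd = y * hyp1f1((3 - K) / 4, 1.5, y**2)
    return np.exp(-y**2 / 2) * (c1 * f_even + c2 * f_odd)

def current(y, h=1e-6):
    # j = Im(psi* dpsi/dy), with the derivative by central differences
    dpsi = (psi(y + h) - psi(y - h)) / (2 * h)
    return (np.conj(psi(y)) * dpsi).imag

j_vals = [current(y) for y in (0.3, 1.0, 2.5)]
v_vals = [current(y) / abs(psi(y))**2 for y in (1.0, 3.0, 5.0)]
print(j_vals)   # constant in y, as required for an eigenstate
print(v_vals)   # magnitude decreases rapidly with |y|
```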
Velocity field of a superposition of eigenstates
Let us consider a quantum state that is a superposition of eigenstates corresponding to various K ’s. We know from Eq. ( 17 ) that the velocity field is To study the asymptotic behaviour of ( 30 ) as , we first need an asymptotic expression for as . We derive such an expression in the supplementary material, using the approach given in ref. 24 .
Asymptotic behaviour of the velocity field
Using the expansion , we can express the current as Using the asymptotic form derived in the Supplementary Information, we write at large , Eq. ( 32 ) becomes where we have retained only the leading order of y . Similarly, we can prove that Therefore, the velocity field Equation ( 35 ) implies that (see Fig. 2 ). Therefore, a superposition of eigenstates corresponding to different K ’s is a bound state. Let us proceed next to the case of multiple particles.
Velocity field for multiple particles
We want to check whether the asymptotic behaviour of the velocity field discussed in the previous subsections also holds in the case of multiple particles, each in a harmonic oscillator potential. Consider an N -particle quantum state where is an eigenstate of the g-th particle corresponding to the eigenvalue in the j-th term of the superposition. We know that the current in the r-th direction is Similar to the previous subsection, we can express at large , and then simplify ( 37 ) as On the other hand, which implies that Equation ( 40 ) confirms that the velocity field is such that as . Therefore, the system configuration remains bounded at all times and is a bound state (see Fig. 3 ).
Relaxation to equilibrium
In pilot-wave theory for normalizable quantum states, it is well known that an arbitrary initial density of configurations relaxes to the Born rule density (called the equilibrium density) at a coarse-grained level, subject to standard statistical mechanical assumptions 13 , 18 , 19 . In this section, we look at whether such a relaxation occurs to a well-defined equilibrium density when is non-normalizable.
Pilot-wave equilibrium: a generalisation of quantum equilibrium
Consider an ensemble of systems described by a non-normalizable quantum state with a normalized density of configurations . We want to understand if a physically meaningful equilibrium density can be defined for the ensemble. In the case of normalizable quantum states, we know that the equilibrium density satisfies the following conditions:
1. Entropy maximization: The equilibrium density minimises an appropriately defined H -function (the negative of which is maximised).
2. Equilibrium stability: The equilibrium density continues to be in equilibrium with time.
3. Equivariance: The functional form of the equilibrium density in terms of the quantum state is preserved with time.
4. Quantum-mechanical equivalence: The statistical predictions made by the equilibrium density are equal to those predicted by orthodox quantum mechanics for the same quantum state.
Let us check whether these conditions can be met in our scenario. Consider the first condition: we typically seek a density that minimises the H -function 18 where the integral is defined over all of configuration space and is the set of all reals. Equation ( 41 ) immediately lands us in trouble as it is formally the relative entropy from to – but , being non-normalizable, is not a probability density over . Therefore, is not a mathematically well-defined relative entropy.
Fortunately, it is straightforward to rectify the definition of H for our scenario. We note that, in general, the density may have support only over a proper subset of . Let us assume that is a proper subset of , that is, has a compact support. We can then treat as a probability density over once appropriately normalized. We define a candidate equilibrium density where . We then replace by Note that, since is a valid probability density over , is a well-defined relative entropy from to . Equation ( 43 ) can be written as so that the integrand is always non-negative, which implies the lower bound , achieved when . Therefore, the newly-defined quantities and together satisfy the first condition set out at the beginning of the subsection.
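A small numerical illustration of the redefined H-function (a sketch with an arbitrary choice of support and densities, not the paper's examples): on a compact support $S$, the renormalization of $|\psi|^2$ over $S$ is a valid probability density, so the relative entropy is well-defined, non-negative, and vanishes exactly at equilibrium:

```python
import numpy as np

# compact support S = [-3, 3], discretised; psi2 stands in for a
# non-normalizable |psi|^2 (it grows at large |y|, so it cannot be
# normalized over all of R, but it can be over S)
y = np.linspace(-3.0, 3.0, 2001)
dy = y[1] - y[0]
psi2 = np.exp(y**2 / 2)
rho_pw = psi2 / (psi2.sum() * dy)      # candidate equilibrium density on S

rho = np.exp(-y**2 / 2)                # some other normalized density on S
rho /= rho.sum() * dy

def H(rho, rho_pw):
    # relative entropy of rho with respect to rho_pw over S
    return (rho * np.log(rho / rho_pw)).sum() * dy

print(H(rho, rho_pw) > 0, H(rho_pw, rho_pw) == 0)   # True True
```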
Let us next consider the second condition: does the initial density evolve to a that minimises ? We know that 14 , since both and satisfy the same continuity equation, we have where . Equation ( 45 ) implies that, given an initial density , we have where is the support of . We note that Eq. ( 46 ) implies The time-dependent H -function remains constant at its lower bound for the density . Thus, an initial density that minimises will evolve in time so as to minimise at all times.
The third condition, of equivariance, is not directly met as the support is not determined by the quantum state. However, it is clear from ( 46 ) that the functional form of in terms of over is invariant with time. We may therefore define the following condition to be pilot-wave invariance: the functional form of the density in terms of the quantum state over its support is invariant with time. Pilot-wave invariance is motivated by the notion of equivariance, and reduces to it in the special case that is normalizable and .
Is the fourth condition also met? This condition ceases to make sense in our case, as we are dealing with quantum states that are non-normalizable. Such states are considered unphysical in orthodox quantum mechanics, and the theory provides no experimental probabilities for ensembles with such states. In view of the fact that conditions 1, 2 and 3 (suitably modified) are satisfied, and condition 4 is inapplicable, we may define a density that satisfies only the first three conditions to be in pilot-wave equilibrium (as opposed to quantum equilibrium). The terminology makes explicit the fact that quantifies relaxation to an equilibrium density in pilot-wave theory regardless of whether that density reproduces orthodox quantum mechanics, whereas quantifies relaxation to the equilibrium density that reproduces orthodox quantum mechanics. For normalizable states, the notion of pilot-wave equilibrium reduces to quantum equilibrium for the special case when .
To conclude, we define a density with support to be in pilot-wave equilibrium if and only if Clearly, there are infinitely many that can be in pilot-wave equilibrium, as there are infinitely many subsets of . The density minimises the H -function at all times. If does not satisfy condition ( 49 ), then we define it to be in pilot-wave nonequilibrium. Note that a rescaling , where is a complex constant, does not change the equilibrium condition ( 49 ), similar to the definition of the velocity field ( 17 ). Lastly, we also note that although the concept of pilot-wave equilibrium has been motivated by a consideration of non-normalizable quantum states, it is applicable to normalizable quantum states as well.
H -theorem for relaxation to pilot-wave equilibrium
We now turn to the question whether an arbitrary ensemble density will relax to pilot-wave equilibrium at a coarse-grained level, analogous to relaxation to quantum equilibrium for normalizable states. We show this is indeed the case by proving an H -theorem for .
In the proof for relaxation to classical statistical equilibrium 25 or quantum equilibrium 18 , an important role is played by the fact that the exact H -function is constant with time. To build an analogous H -theorem for pilot-wave equilibrium, our first task then, is to ascertain if is constant with time. From Eqs. ( 41 ), ( 42 ) and ( 43 ), the relationship between the two H -functions is Clearly, it is sufficient to prove the constancy of to prove that is constant with time. We know, from Eq. ( 47 ), that is constant with time if the initial density is in pilot-wave equilibrium. Let us consider an arbitrary initial density with support in pilot-wave nonequilibrium, piloted by a non-normalizable state . We also consider the pilot-wave equilibrium density over , where . As both and are piloted by , they will obey similar continuity equations where is determined by according to ( 17 ). The velocity field provides the mapping from . We also know from Eq. ( 46 ) that Therefore, the quantity is in fact constant with time, and we can label it by . This implies that an arbitrary initial density with defined over a region of low (high) will ‘shrink’ (‘expand’) if it moves to a region of high (low) . Lastly, Eqs. ( 51 ) and ( 55 ) imply that We are now ready to prove the subquantum H -theorem for . We first subdivide the configuration space into small cells of volume . We then define the coarse-grained quantities where the integral is performed over the cell which contains . Clearly, and are constant in each cell. We define the quantity and its coarse-grained version if , where of . Subtracting ( 53 ) from ( 52 ) and using the definition of , we have which is analogous to Eq. ( 45 ). 
We define the coarse-grained version of to be Analogous to the H -theorems for classical statistical equilibrium 25 and for quantum equilibrium 18 , we assume that there is no initial fine-grained structure, that is, Let us consider Using the initial conditions ( 63 ) and ( 64 ), and the fact that is constant with time, we can simplify the first term in RHS of ( 65 ) as The second term in RHS of ( 65 ) can be written as where the integral over has been broken up into integrals over each cell of volume . As and are constant over these cells, we can write , and if belongs to the cell. It then follows that where, in Eq. ( 70 ), we have used the relation ( 63 ). Using ( 67 ) and ( 71 ), we can rewrite ( 65 ) as We note that Using ( 77 ), we can rewrite Eq. ( 73 ) as Using the identity for all real x , y , it is then clear from Eq. ( 78 ) that . We have, therefore, proven an H -theorem for , subject to assumptions similar to those assumed for relaxation to quantum equilibrium.
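The key coarse-graining step can be illustrated in isolation (a sketch of the inequality, not the dynamical proof): by the log-sum inequality, replacing the fine-grained densities by their cell sums can only decrease the H-function, which is what drives the coarse-grained H-function down once fine-grained structure develops:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, per_cell = 50, 20                       # coarse-graining cells

# fine-grained weights of rho and of the equilibrium density (both normalized)
rho = rng.random(n_cells * per_cell); rho /= rho.sum()
rho_pw = rng.random(n_cells * per_cell); rho_pw /= rho_pw.sum()

H_fine = (rho * np.log(rho / rho_pw)).sum()

# coarse-graining: sum the weights within each cell
rho_cg = rho.reshape(n_cells, per_cell).sum(axis=1)
rho_pw_cg = rho_pw.reshape(n_cells, per_cell).sum(axis=1)
H_coarse = (rho_cg * np.log(rho_cg / rho_pw_cg)).sum()

print(H_coarse <= H_fine)   # True: coarse-graining cannot increase H
```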
Relationship between relaxation to pilot-wave equilibrium and to quantum equilibrium
Although the H -theorem for gives the theoretical basis for relaxation to pilot-wave equilibrium, we need numerical evidence to determine whether relaxation in fact occurs. There exists a large body of results in the literature on the numerical evidence for relaxation to quantum equilibrium for normalizable states. It is, therefore, of interest to understand the relation between relaxation to pilot-wave equilibrium for non-normalizable states and relaxation to quantum equilibrium for normalizable states, if any.
We begin by noting that Eq. ( 58 ) can be written as where and . From Eqs. ( 61 ) and ( 80 ), we can then derive where It is clear from ( 81 ) that the lower bound of is , corresponding to pilot-wave equilibrium . The relationship ( 81 ) implies that a study of the behaviour of is equivalent to that of . It now remains to recast this study in terms of normalizable states.
Consider the non-normalizable quantum state from Eq. ( 36 ). We know that the velocity field at large . Suppose L is a number sufficiently large that is very small at ; then an initial distribution localised in the region cannot escape to for an arbitrarily long time (depending on the value of L chosen). This implies that we effectively need only for to know how evolves in the direction. We can utilise this feature of the velocity field to define a normalizable quantum state with the same velocity field in the region as that of the non-normalizable quantum state.
Let us define the normalizable quantum state where is the Heaviside-step function, m is a positive integer and L is a very large constant such that is very small at for all . We know that is normalizable as at large for all . Clearly, we can replace by to evolve if has an initial support . The evolution of itself is non-unitary as . This is because is numerically, but not functionally, equal to in the subset . Therefore, we can study relaxation to pilot-wave equilibrium using normalizable states, but doing so would require non-unitary dynamics. A complete relaxation to pilot-wave equilibrium would correspond to a partial relaxation to quantum equilibrium (see Fig. 4 ).
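The construction can be sanity-checked numerically. A sketch, assuming a smooth real cutoff of the generic form $e^{-(y/L)^{2m}}$ (the exact elided expression is not reproduced here): because the cutoff factor is real, it cancels between the current and $|\psi|^2$, so the truncated state generates the same velocity field wherever the cutoff is non-zero:

```python
import numpy as np
from scipy.special import hyp1f1

K = 2.5
c1, c2 = np.exp(0.3j) / np.sqrt(2), np.exp(-1.1j) / np.sqrt(2)

def psi(y):
    # non-normalizable eigenstate: complex superposition of the two branches
    f_even = hyp1f1((1 - K) / 4, 0.5, y**2)
    f_odd = y * hyp1f1((3 - K) / 4, 1.5, y**2)
    return np.exp(-y**2 / 2) * (c1 * f_even + c2 * f_odd)

L, m = 6.0, 4
def psi_trunc(y):
    return psi(y) * np.exp(-(y / L) ** (2 * m))   # assumed smooth real cutoff

def v(f, y, h=1e-6):
    # pilot-wave velocity v = Im(f* f') / |f|^2, derivative by central differences
    df = (f(y + h) - f(y - h)) / (2 * h)
    return (np.conj(f(y)) * df).imag / abs(f(y)) ** 2

print(v(psi, 1.0), v(psi_trunc, 1.0))   # identical well inside |y| < L
```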
Theoretical and experimental implications
In this section, we sketch the theoretical and experimental implications of our work. Although we have focused on the harmonic oscillator, the general approach adopted in this paper and the notion of pilot-wave equilibrium introduced are not exclusive to the harmonic oscillator. Therefore, where applicable, we discuss the implications in the broader context of non-normalizable quantum states with a normalized density of configurations.
Non-relativistic quantum theory
Experimental observation of continuous-energy eigenstates
We have seen that pilot-wave theory gives a physical interpretation for non-normalizable harmonic oscillator states as bound states. However, such states have continuous energies and have never been experimentally observed. Does this directly falsify pilot-wave theory in favour of orthodox quantum mechanics?
We first note that unitarity imposes restrictions on the preparation of non-normalizable states in a laboratory. This is because, if the initial joint quantum state of the preparation apparatus (including all the atoms of all the equipment, etc.) is normalizable, then the joint quantum state will remain normalizable after the preparation is completed. The argument can be repeated to conclude that non-normalizable states can potentially be detected today only if there existed non-normalizable states in the early universe.
Consider an atom in the early universe in a non-normalizable eigenstate , where K is continuous. The atom will, in general, be subject to small perturbations across the universe. It can be shown, from time-dependent perturbation theory, that the quantum state will evolve as up to first order in , where is the unperturbed Hamiltonian of the atom. Note that, as the Dyson series does not assume state normalizability 26 , Eq. ( 84 ) is valid for . Let us consider realistic perturbations that are small and localised in space. That is, suppose the perturbations are of the approximate form so that they rapidly fall off around . Then, using the fact that is an eigenstate, we can write the integrand in ( 84 ) as as is square integrable (although is not) and can be expanded in terms of the normalizable eigenstates of . Note that a perturbation arbitrarily distant from the atom is sufficient to make square integrable, given that falls off rapidly. Therefore, for realistic perturbations, Eq. ( 84 ) becomes so that the quantum state becomes a superposition of the non-normalizable and the normalizable ’s. If the atom now interacts strongly with the environment to cause an effective energy-measurement, then the possible eigenvalues are the discrete energies as well as the continuous energy . Using the von Neumann measurement Hamiltonian 27 , we can represent the combined state of the atom and an idealised pointer variable after such a measurement to be where g is the interaction constant, is the pointer state, and is used to represent both and in the superposition ( 87 ). The probabilities will not be given by the Born rule as is non-normalizable, but will have to be computed from the normalized probability density . Note that decoherence will effectively occur as long as the pointer wavefunction is normalizable.
Further interactions with macroscopic bodies will cause further decoherence 4 , so that the measurement will be effectively irreversible as for normalizable quantum states.
Therefore, the atom, on account of perturbations and interactions with the environment, may transition to a normalizable energy eigenstate. In that case, the total quantum state will remain non-normalizable but the system configuration will enter an effectively-decohered normalizable branch. After N such measurements, the fraction that remains in the non-normalizable branch will be given by where the fraction lost to the normalizable branches in the j-th measurement is labelled by . Clearly, as unless where is some positive integer. The condition is possible if the initial density, the initial joint quantum state of the atom and the idealised measurement apparatus, and the perturbations are so finely tuned that the configuration density remains completely in the non-normalizable branch for all . Without such fine tuning, the probability of the atom remaining in becomes tiny after a sufficiently long time corresponding to a large N . Note the key role played by perturbations here as they continuously add superpositions of normalizable eigenstates to the total quantum state. Therefore, we would not in general expect non-normalizable states in the early universe to have survived to the present time. Further technical work is required to ascertain the survival timescales for various non-normalizable states and perturbations.
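The depletion argument is simple arithmetic. As an illustration (with a hypothetical constant loss fraction per effective measurement, chosen only for concreteness):

```python
# fraction remaining in the non-normalizable branch after N effective
# measurements, with a hypothetical constant loss eps per measurement
eps, N = 0.1, 50
P = (1 - eps) ** N
print(P)   # exponentially small for large N (here about 0.005)
```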
Signalling and pilot-wave equilibrium
We know that no-signalling is generally violated in quantum nonequilibrium 28 . Given that quantum equilibrium (when applicable) is a special case of pilot-wave equilibrium, it is of interest to understand the signalling behaviour of ensembles in pilot-wave equilibrium. This is important to understand whether non-normalizable states in pilot-wave equilibrium are no-signalling. Below, we show that no-signalling is violated generally in pilot-wave equilibrium.
Consider an initial two-particle entangled quantum state , where the two particles are located in space-like separated wings. Suppose an initial density with the support where are very small. Then . The density on is in pilot-wave equilibrium by definition.
Suppose evolves under the Hamiltonian . The question is whether the marginal density of is affected by the distant local Hamiltonian under the control of the experimenter at the second wing. We know that, since is entangled, the velocity of the first particle will depend on and thereby on . Furthermore, in the limit , and the initial marginal density of the first particle will be . It is then clear that, since depends on and contains only the point , will depend on . The statistics of a position measurement performed at the first wing at time t will then depend on the Hamiltonian chosen by the experimenter at the second wing. We conclude that, in general, correlations generated by an ensemble in pilot-wave equilibrium are signalling, unless the ensemble is also in quantum equilibrium. As there is no notion of quantum equilibrium for non-normalizable states, we conclude that non-normalizable states generate signalling correlations in general.
Quantum field theory
We know that quantum fields can often be treated as a collection of harmonic oscillators 29 . For illustration, let us consider the pilot-wave treatment 30 of a free, massless real scalar field on a flat expanding space-time, with the Lagrangian density , where is the scale factor and for simplicity. The functional Schrodinger equation for this system is where is the quantum state defined over the configuration space , are real variables related to the Fourier-transform of by and is the canonical momentum. Here V is the box-normalization volume. Note that Eq. ( 90 ) assumes a regularization so that a finite (but arbitrarily large) number of can be considered.
Equation ( 90 ) clearly shows that can be treated as a collection of independent harmonic oscillators in the Fourier space. Notably, although the field is assumed to have a Fourier-transform, we need not make the same assumption about which is piloting . Therefore, we can consider the non-normalizable solutions to ( 90 ) explored in this paper. Such solutions may have implications in cosmological settings 13 , 30 .
Quantum gravity
It is well known that non-normalizable quantum states are often encountered in quantum gravity 21 , 22 , 31 . Such states are also encountered when pilot-wave dynamics is formulated on shape space, where a different approach to the problem of non-normalizability from a pilot-wave perspective has been explored 32 . Recently, Valentini has argued for a pilot-wave approach to quantum gravity where statistical predictions are derived from a normalized configuration density 33 . This is close to the approach adopted in our work, but there are several important differences. It is useful to discuss the implications of our work for quantum gravity in the context of ref. 33 .
First, ref. 33 argues that there is no physical equilibrium density for non-normalizable quantum states, on the basis that the lower bound of diverges to . However, this argument has multiple flaws. Firstly, the lower bound of diverges only in the particular case where the support of the configuration density is the entire configuration space , that is . For all other cases the lower bound of is , as can be seen from Eq. ( 51 ). Secondly, we have argued that, for non-normalizable quantum states, the notion of quantum equilibrium must be replaced by the more general notion of pilot-wave equilibrium. Correspondingly, must be replaced by to define a physical equilibrium density. Therefore, our results imply that some form of the Born rule arises as a physical equilibrium density for non-normalizable states.
Second, ref. 33 has emphasised that non-normalizability of the quantum state is due to the “deep physical reason” that the Wheeler-DeWitt equation on configuration space has a Klein-Gordon-like structure. In our approach, on the other hand, there is no special role played by the structure of any particular equation. We have argued that non-normalizability is intrinsic to pilot-wave theory – only a normalized configuration density is needed to obtain statistical predictions. The quantum state, which defines the evolution of the configuration, need not be normalizable. Therefore, non-normalizable quantum states follow naturally from the first principles of the theory, and the structure of the Wheeler-DeWitt equation can only play a technical role. This implies that non-normalizable solutions to the Schrodinger equation or the Dirac equation are, where applicable, as valid from a pilot-wave perspective as those to the Wheeler-DeWitt equation.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-023-50814-w.
Acknowledgements
I am thankful to Matt Leifer for encouragement and several helpful discussions. I am also thankful to Siddhant Das and Tathagata Karmakar for helpful discussions. The author was supported by a fellowship from the Grand Challenges Initiative at Chapman University.
Author contributions
I.S. is the sole author.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Competing interests
The author declares no competing interests. | CC BY | no | 2024-01-15 23:41:59 | Sci Rep. 2024 Jan 13; 14:669 | oa_package/dc/54/PMC10787783.tar.gz |
PMC10787784 | 38218738 | Introduction
Urinary Tract Infections (UTIs) are one of the most common bacterial infections in older adults, constituting around 25% of all infections 1 – 5 . Clinical presentation ranges from self-limited illness to severe sepsis. UTIs account for ~9–31% of cases of severe sepsis, which itself has an estimated mortality of 20–40% 4 , 6 – 9 . To differentiate between asymptomatic bacteriuria and UTIs, clinicians rely on positive findings of bacteriuria and genitourinary symptoms. Diagnosis is further complicated by the presence of cognitive impairment or dementia, since People Living with Dementia (PLWD) may find it challenging to report their symptoms, and this could result in further complications 10 – 12 . As a result, acute infections might not be diagnosed until symptoms require hospitalisation 13 . In the United Kingdom, over 20% of hospital beds are occupied by PLWD, with 9% of these attributed to UTIs 14 – 17 .
Currently, a urine sample test and acute changes in baseline cognition are used to diagnose UTIs in PLWD 18 . However, samples can be difficult to obtain due to urinary incontinence, cognitive impairment, sample contamination or previous use of antibiotics 19 and are taken on suspicion of an infection, which may be delayed. Additionally, although they can be used as rapid detectors, dipstick tests have a high false positive rate for older adults and require action from the PLWD or their carer which limits their effectiveness for diagnosis 3 , 20 . Highlighting UTI risk by identifying early symptoms would allow for prompt diagnosis, improved health outcomes and effective allocation of healthcare resources.
Machine Learning (ML) offers opportunities for clinical diagnosis and decision support, and recent advances show promise for the development of advanced predictive models that incorporate patient data to improve diagnostic performance. For UTI detection in PLWD, ML can improve diagnostic performance and timeliness. Several investigations have been conducted into UTI risk prediction in younger adult populations 21 , 22 , which do not generalise to older adults. Existing methods developed for older adults also rely on typical symptoms as predictor variables, precluding their use in community-dwelling patients with dementia who have atypical clinical manifestations and who may struggle to express symptoms 23 . In parallel, low-cost monitoring devices have been developed to offer complementary solutions to the typical diagnostic criteria 24 , 25 . Rantz et al. 26 used activity data collected from in-home Passive Infra-Red (PIR) sensors to detect UTIs in older adults. However, their work is limited to 37 participants and does not utilise ML techniques. The study is also limited to the use of activity data and does not utilise physiological measurements. Work by Enshaeifar et al. 17 employed an unsupervised approach to predict UTIs based on in-home sensors and physiological measurements; however, their work showed insufficient diagnostic performance and required the participant to record their own physiology measurements twice a day.
This study presents a machine learning application for identifying the risk of UTI events in PLWD by analysing symptom-targeted features engineered from continuous in-home activity and physiology data collected by low-cost, passive sensors (Fig. 1 presents an overview). Then, through optimisation and consultations with clinicians, we determine thresholds for the stratification of the risk scores to improve the algorithm’s clinical applicability. The proposed approach has been evaluated in an observational clinical study consisting of 117 participants living with dementia within their own homes. We have worked closely with healthcare professionals to implement a reliable and non-intrusive UTI risk model. Our work will (1) aid clinicians in the early diagnosis of UTIs, and (2) enable a better understanding of in-home behaviour at the point of clinical decision-making. The use of high-resolution in-home observation and measurement data in conjunction with machine learning methods results in timely interventions that can have a significant impact on reducing preventable and unplanned hospital admissions in dementia patients. Such a tool allows for precise collection of urine samples for culture analysis, improved clinical outcomes, a reduction in the burden on healthcare services, and decreased antibiotic overuse and misuse in PLWD by reducing UTI detection time and providing practitioners with more complete pictures of their patients.

Methods
Study design and population
This study was performed in collaboration with Imperial College London and Surrey and Borders Partnership NHS Trust. Participants were recruited from the following: (1) health and social care partners within the primary care network and community NHS trusts, (2) urgent and acute care services within the NHS, (3) social services who oversee sheltered and extra care sheltered housing schemes, (4) NHS Community Mental Health Teams for older adults (CMHT-OP), and (5) specialist memory services at Surrey and Borders Partnership NHS Foundation Trust. All participants provided written informed consent. Capacity to consent was assessed according to Good Clinical Practice, as detailed in the Research Governance Framework for Health and Social Care (Department of Health 2005) and the Mental Capacity Act 2005. Participants were provided with a Participant Information Sheet (PIS) that includes information on how the study used their personal data collected in accordance with the GDPR requirements. If the participant was deemed to lack capacity, a personal or professional consultee was sought to provide written consent to the study. Additionally, capacity of both the participant and study partner is assessed at each research visit. Research staff conducting the assessment have completed the NIHR GCP training and Valid Informed Consent training. If a participant is deemed to lack capacity but is willing to take part in the research, a personal consultee is sought in the first instance to sign a declaration of consent. If no personal consultee can be found, a professional consultee, such as a key worker, is sought. This process is included in the study protocol and ethical panel approval is obtained.
Eligible study participants included adults >50 years with a clinically ascertained diagnosis of dementia or mild cognitive impairment and current or previous treatment at a psychiatric unit. Participants lacking capacity for informed consent were required to have a partner or caregiver who had known them for at least 6 months and was able to attend research assessments with them. Exclusion criteria were as follows: (1) receiving treatment for a terminal illness; (2) presence of severe mental health conditions including depression, anxiety, psychosis, and agitation; (3) presence of active suicidal thoughts. In total, 117 participants were selected for participation using the above-mentioned recruitment process.
The cohort characteristics can be seen in Table 2 and a patient disposition is available in Supplementary Information Section 1 .
Data collection and definition of outcome
Demographic data was collected during the baseline assessment, whilst psychometric scales were used to collect various physical and cognitive data during regular visits. In-home observation and measurement data was obtained using low-cost off-the-shelf monitoring technologies, including PIR sensors (for measuring activity) and sleep monitoring devices. Figure 1 presents cohort-wide sleep and activity activations, and differences in sleep and activity for a participant with both UTI-positive and UTI-negative days. PIR sensors can detect motion within 9 metres and within a maximum angle of 45°, and the sleep mat device can monitor breathing rate, heart rate, and sleep states. For an illustration of the layout of sensors, see Supplementary Information Section 2 .
Urine samples were collected from several enrolled participants and labelled by clinicians. Additionally, a baseline algorithm developed in our previous work 40 flagged patients to the study monitoring team, who checked for additional symptoms of UTIs, arranged a sample collection, and referred to the GP if needed. Once samples were collected, a urine sample analysis was performed and the results were sent to clinicians, who, with information from the monitoring team, determined whether a UTI was present. In total, we have 258 labelled urine samples from 64 participants, of which 81 were confirmed positive UTI cases. If a single day has been labelled, we assume the preceding and following 3 days would also be labelled the same (see Supplementary Information Section 6 ). This extends the number of labelled days of data to 1752, consisting of 534 positives and 1218 negatives. For our experimentation, we used data collected between 2021/06/28 and 2022/12/01. The models were trained to predict whether a participant had a UTI on a given day (24 h time window). The distribution of labels can be found in Supplementary Information Section 3 .
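The ±3-day label propagation described above can be sketched as follows. `propagate_labels` is a hypothetical helper, not the study's code; it assumes labels are keyed by participant and calendar day, and that a positive label takes precedence where windows from conflicting labels overlap (the paper's tie-breaking rule is in Supplementary Information Section 6).

```python
from datetime import date, timedelta

def propagate_labels(labelled_days, window=3):
    """Expand each clinically labelled day to the surrounding window.

    `labelled_days` maps (participant_id, day) -> label (True = UTI positive).
    Each labelled day also labels the `window` preceding and following days.
    """
    expanded = {}
    for (pid, day), label in labelled_days.items():
        for offset in range(-window, window + 1):
            key = (pid, day + timedelta(days=offset))
            # Assumption for illustration: a positive label wins on overlap.
            expanded[key] = expanded.get(key, False) or label
    return expanded

labels = {("p1", date(2022, 1, 10)): True, ("p1", date(2022, 1, 20)): False}
expanded = propagate_labels(labels)
# Each labelled day now covers 7 days (itself plus 3 either side).
```

Applied to the 258 labelled samples, this kind of expansion yields the 1752 labelled days reported above.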
Data pre-processing and feature selection
In addition to sensor readings, we performed feature engineering inspired by well-known symptoms of UTIs such as incontinence, urgency and increased frequency of urination, and behavioural changes ( https://www.nhs.uk/conditions/urinary-tract-infections-utis/ ) to allow clinical interpretability and improve model performance and generalisability.
Raw features were: (1) frequency of bathroom, bedroom, hallway, kitchen, lounge activations; (2) mean and standard deviation of nocturnal heart rate and respiratory rate; (3) nocturnal awake occurrences. Engineered features were: (4) bathroom day and nocturnal frequencies, moving average, and percentage change; (5) mean and standard deviation of the movement time from any location within the house to bathroom; (6) daily entropy in PIR sensor activation; (7) number of previous UTIs to date. More information on the features selected can be found in Supplementary Information Section 4 .
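Several of the engineered features above (daily entropy of PIR sensor activations, moving averages, and percentage change of bathroom visit counts) can be illustrated with a small sketch. The function names and the trailing window length are assumptions for illustration, not the study's implementation.

```python
import math

def location_entropy(activation_counts):
    """Shannon entropy (bits) of one day's PIR activations across locations;
    higher values mean activity is spread more evenly through the home."""
    total = sum(activation_counts.values())
    entropy = 0.0
    for count in activation_counts.values():
        if count > 0:
            p = count / total
            entropy -= p * math.log2(p)
    return entropy

def moving_average(values, window=3):
    """Trailing moving average of a daily count series."""
    out = []
    for i in range(len(values)):
        segment = values[max(0, i - window + 1): i + 1]
        out.append(sum(segment) / len(segment))
    return out

def percentage_change(values):
    """Day-on-day percentage change in a daily count series."""
    return [0.0 if prev == 0 else 100.0 * (cur - prev) / prev
            for prev, cur in zip(values, values[1:])]
```

For example, a day whose activations are spread evenly over four rooms has an entropy of 2 bits, while a day spent in a single room has an entropy of 0.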
Data collection occurred outside controlled environments using in-home devices so missing measurements inevitably occurred. To limit the effects of incomplete data 41 , we imputed missing values based on strategies depending on the given features (see Supplementary Information Section 5 for more information).
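As an illustration of feature-dependent imputation, the sketch below fills gaps either by carrying the last observation forward (suited to slowly varying physiology) or by substituting the participant's median (suited to counts). The study's actual per-feature strategies are in Supplementary Information Section 5; these two are assumptions for illustration only.

```python
from statistics import median

def impute(series, strategy="carry_forward", fallback=0.0):
    """Fill missing values (None) in a daily feature series using one of
    two illustrative strategies; `fallback` is used when nothing has been
    observed yet."""
    observed = [v for v in series if v is not None]
    if strategy == "median":
        fill = median(observed) if observed else fallback
        return [fill if v is None else v for v in series]
    # carry_forward: repeat the last observed value.
    filled, last = [], None
    for v in series:
        if v is None:
            filled.append(last if last is not None else fallback)
        else:
            filled.append(v)
            last = v
    return filled
```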
Analysis platform
All analyses were performed on a secure computing environment at Imperial College London using Python version 3.9. The Pandas 42 , Numpy 43 , Scikit-Learn 44 , and Pytorch 45 packages formed much of our pipeline.
Methodology
To ensure generalisability, we evaluated our work in two different ways.
Firstly, the dataset was split temporally into training and testing subsets in an 80:20 ratio. The data collected from 2021/06/28 to 2022/10/05 represented 80% ( n = 1394 days in total, from p = 54 participants) of the dataset, whilst the data between 2022/10/05 and 2022/12/01 represented 20% ( n = 358, p = 39).
This formed the first analysis, evaluating the model at making predictions on future data from the same cohort as it was trained on. We will refer to this experimental setting as “Date Split".
In the second analysis, we used a leave-one-out cross-validation strategy 46 . Here, data was split in the same way as in the first evaluation method. Then, training and testing of our machine learning models was performed using a leave-one-out strategy on data from each of the PLWD. This way, we are able to test the model performance on data from participants outside of the cohort it has been trained on. We will refer to this experimental setting as “Date-ID Split".
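The two evaluation schemes can be sketched as follows; `date_split` and `leave_one_participant_out` are hypothetical helpers mirroring the "Date Split" and "Date-ID Split" descriptions above, with each record reduced to a (participant_id, day) pair for brevity.

```python
from datetime import date

def date_split(records, cutoff):
    """'Date Split': train on days before the cutoff, test on the rest."""
    train = [r for r in records if r[1] < cutoff]
    test = [r for r in records if r[1] >= cutoff]
    return train, test

def leave_one_participant_out(records):
    """'Date-ID Split' style folds: for each participant, train on all other
    participants' data and test on theirs, so test participants are never
    seen during training."""
    participants = sorted({r[0] for r in records})
    for held_out in participants:
        train = [r for r in records if r[0] != held_out]
        test = [r for r in records if r[0] == held_out]
        yield held_out, train, test

records = [("p1", date(2022, 1, 1)), ("p1", date(2022, 11, 1)), ("p2", date(2022, 6, 1))]
train, test = date_split(records, cutoff=date(2022, 10, 5))
folds = list(leave_one_participant_out(records))
```

The same participant-grouped splitting is available off the shelf as `sklearn.model_selection.LeaveOneGroupOut`.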
During model development and whilst optimising model parameters, validation sets were produced by splitting the training data on the date 2022/09/11. All experiments were performed multiple times, with each run using a bootstrap sample 46 of the training set to ensure reproducibility. See Supplementary Information Section 8 for a visualisation of this evaluation.
We used sensitivity, specificity, and area under the precision-recall curve to measure model performance (for definitions of metrics, please see Supplementary Information Section 7 ).
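For concreteness, the reported metrics can be computed from binary labels as below. This is a generic sketch rather than the study's pipeline (which used scikit-learn); `average_precision` follows the step-wise definition of the area under the precision-recall curve used by `sklearn.metrics.average_precision_score`.

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    from binary labels and binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

def average_precision(y_true, scores):
    """Step-wise average precision: sum over positives of precision at the
    rank where each positive is recovered, weighted by the recall step."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, total_pos = 0, sum(y_true)
    ap, prev_recall = 0.0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            tp += 1
            recall = tp / total_pos
            ap += (recall - prev_recall) * (tp / rank)
            prev_recall = recall
    return ap
```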
Model development
We tested Logistic Regression (LR), Extreme Gradient Boosting Decision Tree (XGBoost) 27 , Multilayer Perceptron (MLP), Self-Attention 28 , Random Forest (RF) 29 , and Naive Bayes (NB) models for predicting the risk of UTIs. Hyper-parameters were tuned using Bayesian optimisation on train-validation splits, with the model producing the highest area under the precision-recall curve (on validation data) selected for the final analysis. The number of days of data used as input to the model was jointly tested, ranging from 1 day to 7 days. Supplementary Information Section 9 contains information on the decisions made regarding each step of the UTI model pipeline.
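A minimal sketch of assembling multi-day inputs is shown below. How the study combined consecutive days is not specified beyond the window length, so the flattening of daily feature vectors here is an assumption; the resulting rows could then be fed to, for example, `sklearn.linear_model.LogisticRegression(penalty="l2")`.

```python
def make_windows(days, n_days=3):
    """Stack `n_days` consecutive daily feature vectors into one input row,
    labelled with the final day's label, so the model sees short behavioural
    trends rather than single days. `days` is a date-ordered list of
    (feature_vector, label) pairs for one participant."""
    rows = []
    for i in range(n_days - 1, len(days)):
        window = days[i - n_days + 1: i + 1]
        features = [x for vec, _ in window for x in vec]
        rows.append((features, window[-1][1]))
    return rows

days = [([1, 2], 0), ([3, 4], 0), ([5, 6], 1), ([7, 8], 0)]
rows = make_windows(days, n_days=3)
```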
Stratification of risk scores for clinical reporting
Risk scores from the model are stratified into three groups, used to inform clinical decisions in a concise way and provide precise control over the number of actionable alerts. Outputs are split into the groups Green, Amber, and Red, referring to minimal, medium, and high risk of a UTI respectively. By varying these thresholds, we can balance levels of sensitivity and specificity for the different groups with the number of alerts. This allows our process to be flexible to different clinical scenarios and resources.
Within this work, the optimal thresholds used in our analysis of results were calculated using the algorithm’s predictions on the data collected between 2022/09/11 and 2022/10/05 (validation data), and with feedback from a clinical team. More information on the risk stratification is included in Supplementary Information Section 12 .
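The grouping of risk scores might look like the sketch below. The two cut-points are placeholder values, not the study's tuned thresholds, which were selected on validation data with feedback from the clinical team.

```python
def stratify(risk_score, green_upper=0.3, red_lower=0.7):
    """Map a model risk score in [0, 1] to an alert group. The thresholds
    here are illustrative placeholders; in practice they are tuned to
    balance sensitivity and specificity against alert volume."""
    if risk_score < green_upper:
        return "Green"
    if risk_score >= red_lower:
        return "Red"
    return "Amber"
```

Raising `red_lower` makes Red alerts rarer but more specific; lowering `green_upper` makes Green (minimal-risk) calls more conservative.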
Ethics approval
The study received ethical approval from the London-Surrey Borders Research Ethics Committee; TIHM 1.5 REC: 19/LO/0102. The study is registered with National Institute for Health and Care Research (NIHR) in the United Kingdom under Integrated Research Application System (IRAS) registration number 257561.
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Results
Model performance
We examined Logistic Regression (LR), Extreme Gradient Boosting Decision Trees (XGBoost) 27 , Multilayer Perceptron (MLP), Self-Attention 28 , Random Forest (RF) 29 , and Naive Bayes (NB) models for their effectiveness in predicting UTI events and found the best-performing classification model was LR with L2 regularisation, acting over 3 consecutive days of data. Table 1 presents this model’s performance on the different data splits. Results from the other models are included in Supplementary Information Section 10 . Analysis of model reliability and calibration can be seen in Supplementary Information Section 11 .
Risk stratification
To improve the flexibility of the model to varying clinical settings, we calculate stratified risk scores as discussed in the Methods section “Stratification of risk scores for clinical reporting”. Figure 2 shows the sensitivity and specificity that can be achieved on the validation set by stratifying the results. By varying the stratification thresholds, sensitivity and specificity can be balanced with the number of people given Green and Red alerts. In Supplementary Information Section 12.1 , we present the performance variations when jointly changing the Red and Green thresholds. Here, we select thresholds , and (following interval notation) for groups Green, Amber, and Red respectively. Table 1 shows the results of grouping the risk predictions on the Red and Green groups.
Early detection
We evaluated the model’s utility in correctly estimating the risk of UTIs prior to the recorded clinical urine tests. Figure 3 demonstrates specificity, sensitivity and the area under precision-recall curve for days prior to the recorded UTI events. This shows that 2 days prior to a sample test, our model achieved a sensitivity of 64.4 (95% CI = 61.1–67.8), specificity of 68.9 (95% CI = 66.8–71.0), and area under the precision-recall curve of 64.5 (95% CI = 63.0–66.0), and 4 days prior, a sensitivity of 64.4 (95% CI = 61.1–67.8), specificity of 71.9 (95% CI = 67.9–75.8), and area under the precision-recall curve of 65.4 (95% CI = 60.8–70.0).
Feature importance
The most important features influencing predictions were identified using SHapley Additive exPlanations (SHAP) 30 , a method for producing explainable predictions and calculating contributions from individual features to risk scores. The results of this, on the test set, can be seen in Fig. 4 a and reveal that the number of previous confirmed UTI events, the standard deviation of the nocturnal respiratory rate, the nocturnal average heart rate, and the number of nocturnal awake states were positively correlated with a higher risk score. We can also break down single predictions to understand contributions to a risk score, as shown in Fig. 4 b. Further examples can be seen in Supplementary Information Section 13 .
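For a linear model such as the selected LR, SHAP values have a closed form under a feature-independence assumption: each feature's contribution to the log-odds is its coefficient times the feature's deviation from a background mean, so contributions sum to the model output minus the expected output. The sketch below uses made-up coefficients and means, not the fitted model.

```python
def linear_shap(coefs, x, background_means):
    """Per-feature contributions to the log-odds of one prediction for a
    linear model, assuming independent features (as in SHAP's linear
    explainer)."""
    return [c * (xi - m) for c, xi, m in zip(coefs, x, background_means)]

# Illustrative numbers only: two features, e.g. previous-UTI count and
# nocturnal heart rate, both standardised against the cohort background.
contrib = linear_shap([0.8, -0.5], [2.0, 1.0], [1.0, 1.0])
# Feature 0 pushes the risk score up by 0.8; feature 1 contributes nothing.
```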
Frequency of generated alerts
To understand the requirements of our model in a clinical setting, we calculated the risk groups of each day of data between 2022/10/05 and 2022/12/01, for each of the PLWD in our dataset. We find that, on average, each of the PLWD will receive 0.25 Green alerts, 0.69 Amber alerts, and 0.06 Red alerts each day. In Supplementary Information Section 14 , we visualise how this risk score varies over time, and in Supplementary Information Section 15 , we present the model performance on subsets of the sensors. Additionally, in Supplementary Information Section 16 , we compare the performance between those with recurrent and non-recurrent UTIs, and in Supplementary Information Section 17 , we compare the results between male and female participants.

Discussion
The 2020 report of the Lancet Commission on dementia prevention, treatment, and care emphasises the significance of individualised interventions to address complex medical problems in dementia, which result in unnecessary hospital admissions, accelerated functional decline, and decreased quality of life 31 . An area of priority development is infection prevention and timely detection and treatment 32 . By conducting preliminary experiments into early identification of possible UTIs in remote healthcare settings, we hope to contribute to directly addressing this priority by investigating more individualised, predictive, and preventative healthcare.
We present a machine learning pipeline for continuous UTI risk screening via analysis of passively collected in-home activity and physiology data. We considered several models and found that LR acting over 3 days attained the top performance (sensitivity of 65.2% (95% CI = 64.3–66.2) and 54.5% (95% CI = 52.7–56.4), and specificity of 70.9% (95% CI = 68.6–73.1) and 73.0% (95% CI = 71.2–74.8) on “Date-ID Split” and “Date Split” respectively). The performance was higher on “Date-ID Split” than “Date Split”, which we hypothesise is due to some PLWD who have opposing labels in the training and testing data. In this case, in “Date Split”, the model might over-fit to the training data from a PLWD. However, in “Date-ID Split” all data seen by the model during testing is from participants not appearing in training. The ratios of positives to negatives in the test sets of the “Date Split” and the “Date-ID Split” are 0.31 and 0.32 respectively, and 0.47 and 0.47 for the training sets of the “Date Split” and the “Date-ID Split” respectively.
Through stratification, risk scores were transformed into more accessible groups, allowing for the flexible management of actionable alerts within a time period. Following this, the performance on the Green and Red groups were significantly improved, achieving a sensitivity of 74.7% (95% CI = 67.9–81.5) and 69.0% (95% CI = 64.4–73.5), and specificity of 87.9% (95% CI = 85.0–90.9) and 94.1% (95% CI = 92.0–96.2) on the “Date-ID Split" and “Date Split" respectively.
SHAP analysis then highlighted the features most strongly predictive of the risk score. Our analysis shows that an increase in the number of previously confirmed UTI events was associated with a positive UTI prediction, agreeing with the literature 33 . We also highlighted the frequency of the lounge and hallway activations as negatively correlated with risk score, whilst the bedroom frequency was positively correlated. We postulate that this results from participants spending more time in bed due to interrupted sleep, or due to the effects of comorbidities. Third, increases in the standard deviation of the night time respiratory rate and the night time average heart rate were correlated with a higher risk of a UTI. Nocturnal respiratory rate has been linked to stress, reflects physiologic and pathophysiologic determinants, and has been suggested as a biomarker for impending hospitalisation 34 – 38 . Increased nocturnal awake occurrences were associated with a higher UTI risk, suggesting PLWD with UTIs were having more disturbed nights of sleep; in agreement with the literature 39 . This could additionally explain why increased standard deviation of the night time respiratory rate and the night time average heart rate were correlated with a higher risk of a UTI. Considering the clinical manifestations of UTIs in older adults, our feature importance results agree with the current understanding of UTIs in PLWD.
This study contains a few limitations that would also allow for future research directions. Whilst this work was conducted using readily available and low-cost sensors (with preliminary analysis of sensor importance presented in Supplementary Information Section 15 ), further directions of work could improve the understanding of the balance between the cost and complexity of deployment and UTI risk prediction performance. The deployed PIR sensors allow us to collect data at low cost, but they do not allow for the distinction between data generated by the person of study and other members of the house. Further work could explore methods of passively collecting personalised data. We found that the sleep mat (which does collect personalised data) significantly improved the analysis performance (Supplementary Information Section 15 ). In Supplementary Information Section 11 , we discuss the model reliability and calibration and find that our model overestimates UTI risk, likely because of the data imbalance in the training set. This motivates the applied risk stratification which allows the monitoring team to balance the sensitivity and specificity with the number of generated alerts; however, when deploying this system, work should be done to understand the trade-off between false positives and false negatives to ensure that the risk groups are well-calibrated. Finally, whilst this work focused on an important section of the population (People Living with Dementia), it would be helpful to apply these techniques in a larger cohort study or one containing older adults in community living environments such as care homes, assisted living, or skilled nursing facilities.
Our feasibility study was conducted within real-world in-home settings on data collected in (near) real-time using off-the-shelf and low-cost sensory technologies and engineered, clinically meaningful, features for predicting UTIs determined by clinicians, urine sample analysis, and a clinical monitoring team. We provide preliminary evidence for the use of such an operation and model which, with further testing, could prove to reduce delays in detecting UTIs in PLWD, and potentially reduce the number of avoidable hospital admissions when used to support clinicians with care. The proposed approach can be scaled rapidly and enable human-in-the-loop decision support by taking advantage of technological advancements, cloud computing, and machine learning. Moreover, risk stratification allows for model calibration to improve patient outcomes and care delivery whilst balancing the cost associated with testing for UTIs. Within an ongoing study or in production, the group thresholds can be modified over time to account for care team resources. SHAP analysis will enable the presentation of explainable results (such as in Supplementary Information Section 13) , allowing clinicians to explore why the UTI algorithm has made a given prediction. Additional future work will involve continuing to investigate our in-home monitoring systems’ effects on clinical outcomes, as well as patients’ quality of life.
When deployed, our model will be continually trained on new data as collected. To ensure the performance consistently meets a minimum standard, we will routinely evaluate the model on a test set and track its performance. Feature importance will also be monitored to confirm the algorithm is producing clinically founded results. This will enable rapid debugging of errors and maintain a high level of quality in predictions.

Abstract

Urinary Tract Infections (UTIs) are one of the most prevalent bacterial infections in older adults and a significant contributor to unplanned hospital admissions in People Living with Dementia (PLWD), with early detection being crucial due to the difficulty of reporting symptoms and limited help-seeking behaviour. The most common diagnostic tool is urine sample analysis, which can be time-consuming and is only employed where UTI clinical suspicion exists. In this method development and proof-of-concept study, participants living with dementia were monitored via low-cost devices in the home that passively measure activity, sleep, and nocturnal physiology. Using 27,828 person-days of remote monitoring data (from 117 participants), we engineered features representing symptoms used for diagnosing a UTI. We then evaluate explainable machine learning techniques in passively calculating UTI risk and perform stratification on scores to support clinical translation and allow control over the balance between alert rate and sensitivity and specificity. The proposed UTI algorithm achieves a sensitivity of 65.3% (95% Confidence Interval (CI) = 64.3–66.2) and specificity of 70.9% (68.6–73.1) when predicting UTIs on unseen participants and, after risk stratification, a sensitivity of 74.7% (67.9–81.5) and specificity of 87.9% (85.0–90.9). In addition, feature importance methods reveal that the largest contributions to the predictions were bathroom visit statistics, night-time respiratory rate, and the number of previous UTI events, aligning with the literature.
Our machine learning method alerts clinicians of UTI risk in subjects, enabling earlier detection and enhanced screening when considering treatment.
Subject terms
Supplementary information
The online version contains supplementary material available at 10.1038/s41746-023-00995-5.
Acknowledgements
This study is funded by the UK Dementia Research Institute (UKDRI) Care Research and Technology Centre funded by the Medical Research Council (MRC), Alzheimer’s Research UK, Alzheimer’s Society (grant number: UKDRI-7002), and the UKRI Engineering and Physical Sciences Research Council (EPSRC) PROTECT Project (grant number: EP/W031892/1). Infrastructure support for this research was provided by the NIHR Imperial Biomedical Research Centre (BRC) and the UKRI Medical Research Council (MRC). The funders were not involved in the study design, data collection, data analysis or writing the manuscript.
Author contributions
AC, FP: Conceptualisation, Methodology, Software, Formal analysis, Investigation, Data Processing, Writing—Original Draft, Review and Editing, Visualisation; KZ, NFL: Writing—Original Draft, Review and Editing; CW: Writing—Original Draft, Review and Editing, Data Collection; TC: Methodology, Writing—Review and Editing; SK: Methodology, Writing—Original Draft, Review and Editing; RJ, MT, MC, KJ, RV, MK, SD: Reviewing, Data Collection; PF: Reviewing, Data Collection, Funding Acquisition JT: Data Collection; DW: Methodology, Data Collection; RN: Clinical Study Lead, Conceptualisation, Data Collection, Writing—Review and Editing, Funding Acquisition; PB: Conceptualisation, Methodology, Writing—Original Draft, Review and Editing, Supervision, Funding Acquisition; FP and KZ contributed equally to this work.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Code availability
The code used in this study will be made available by the corresponding author upon reasonable request.
Competing interests
The authors declare no competing interests.

License: CC BY | Retracted: no | Last updated: 2024-01-15 23:41:59 | Citation: NPJ Digit Med. 2024 Jan 13; 7:11 | Package: oa_package/9e/78/PMC10787784.tar.gz
PMC10787785 | PMID: 38218742

Introduction
In the current era, ionizing radiation is employed worldwide in many fields, such as medical diagnostics and treatments 1 , nuclear facilities, agricultural and industrial applications, mining areas, and research 2 . In addition to these well-known fields, ionizing radiation is also used to screen people for non-medical reasons at border installations and military checkpoints, specifically to find bulk bombs or other illegal items concealed on the body, such as substances not picked up by metal detectors. Advanced imaging technology, such as accelerator-based container scanners and baggage inspection scanners with small, medium, and high energy, has been used in some countries for customs inspections against threats to border security. Some X-ray inspection systems, such as the Orion® 928DX inspection system, use X-rays with energies of nearly 168 keV in airports 3 . Furthermore, the Eagle® G60 ZBx gantry inspection system uses X-rays with dual energies of 3 and 6 MeV in ports 4 . The introduction of X-ray inspection systems at airline checkpoints and border control was met with social controversy, partly connected to the exposure dose received by travelling people, operators, and bystanders. Ionizing radiation employed in such systems may pose health risks if improperly handled or if the equipment’s safety features are deficient. Well-known negative consequences of exposure to ionizing radiation on bone health include a reduction in mineral density, bone growth retardation in children, and spontaneous fractures in older women 5 , and it primarily causes harm by damaging DNA 6 .
To reduce the drawbacks of ionizing radiation, appropriate safety precautions must be implemented to balance its potential benefits against its risks. International radiation protection bodies have proposed specific guidelines to lessen the risks of radiation exposure to human organs. ALARA (As Low As Reasonably Achievable) is the most popular radiation protection rule; it is based on reducing radiation doses by minimizing the time of radiation exposure, putting as much distance as possible between the user and the source of radiation, and using radiation shields 7 . Lead and concrete are the traditional and common shielding materials utilized as protective materials in radiation facilities. Concrete is an excellent shielding material that has attracted significant interest because of its affordability, environmental friendliness, optimum density for radiation attenuation, ease of shaping, low upkeep, and strong mechanical properties 8 . It is also widely used as an effective radiation shield for X-ray inspection systems such as the Eagle P60® ZBx and the Eagle M60® ZBx inspection systems located at the ports and border points of a country 9 . However, due to flaws like cracking and immobility, it is unsuitable for many other applications. Lead is the most commonly utilized radiation shielding material due to its high density, low cost, and superior shielding performance 10 . Even while lead seems like the perfect shield material, its toxicity 11 toward people and the environment makes it less valuable. The need for lead alternatives has increased recently, particularly in the medical industry 12 .
Numerous materials have been developed and their characteristics enhanced for use as radiation shielding to overcome these drawbacks. The chemical stability, flexibility, light weight, and low cost of polymer composites with inorganic fillers like micro- and nanoparticles have led to extensive research into these materials as potential substitutes for conventional radiation shielding materials 13 . Many polymers, including PMMA 14 , polyethylene 15 , polypropylene 16 , polyvinyl chloride 17 , epoxy 18 , styrene-butadiene rubber 19 , natural rubber 20 , silicone rubber 21 , ethylene-propylene-diene monomer (EPDM) 22 , polystyrene 23 , and recycled polymers 24 , 25 , have been studied as radiation-protective matrices. Nagaraja et al. investigated the radiation shielding performance of different types of commonly used polymers within the energy range 81–1332 keV. Among all the tested polymers, the lead tetragonal polymer is the most effective γ-ray absorber 26 . The polymer matrix has been filled with metal oxides like PbO 27 , CdO 28 , Bi2O3 29 , ZnO 30 , Gd2O3 31 , and MgO 32 to develop a radiation shield that can be used to attenuate X-rays and γ-rays.
Metal oxide fillers reinforced in polymer composites at the nanoscale can significantly improve their mechanical, electrical, and optical properties. Additionally, their small size makes these materials extremely effective at attenuating radiation. For example, El-Khatib et al. compared the radiation-shielding abilities of different loadings of micro- and nano-CdO distributed in an HDPE matrix at photon energies ranging from 59.53 to 1408.01 keV and reported that nano-CdO/HDPE composites shielded γ-rays more effectively than micro-CdO/HDPE composites at the same weight fraction 33 . Another study investigated the radiation shielding ability of epoxy-based micro- and nano-WO3 and Bi2O3 reinforced composites 34 . That study demonstrated that the nano dopant is more effective in attenuating photons. Abbas et al. studied the effect of Bi2O3 micro- and nanoparticle content on silicone rubber’s (SR) γ-ray interaction parameters 35 . The attenuation coefficients of the obtained SR samples showed a clear advantage at lower energy levels compared to other energies. Furthermore, the SR with nano-Bi2O3 was superior to the SR with micro-Bi2O3. Additionally, the mechanical results revealed that the material’s flexibility decreased as the Bi2O3 filler was increased to 30%.
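The attenuation comparisons in these studies rest on the narrow-beam exponential attenuation law, I = I₀ exp(−μx), where μ is the linear attenuation coefficient of the shield at the photon energy of interest. The sketch below computes the transmitted fraction and half-value layer for an illustrative (not measured) value of μ.

```python
import math

def transmission(mu_cm_inv, thickness_cm):
    """Fraction of narrow-beam photons transmitted through a shield of the
    given thickness: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_cm_inv * thickness_cm)

def half_value_layer(mu_cm_inv):
    """Thickness that halves the beam intensity: HVL = ln(2) / mu."""
    return math.log(2) / mu_cm_inv

# Illustrative linear attenuation coefficient of 0.2 cm^-1 (placeholder,
# not a measured value for any composite discussed here).
t = transmission(0.2, 5.0)    # about 37% of photons transmitted through 5 cm
hvl = half_value_layer(0.2)   # about 3.47 cm
```

A higher-μ filler (e.g. a heavy metal oxide at high loading) shrinks the half-value layer, which is why denser nanocomposites attenuate more at the same thickness.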
Poly(methyl methacrylate) (PMMA) is an important thermoplastic polymer. PMMA is an optically transparent polymer with a refractive index of 1.49 and a density of 1.20 g/cm 3 and is frequently used as an alternative to inorganic glass 36 , 37 . PMMA is an amorphous polymer that resists corrosion, abrasion, weathering, and chemicals; it is also lightweight, resistant to breaking, and easy to process. Numerous products, including coatings, additives, sealants, optical fibers, and neutron stoppers, have been made with PMMA. Incorporating filler into the PMMA matrix may broaden this versatile material’s applications even further, because well-dispersed filler can improve some of its physical characteristics 38 . Chen et al. evaluated the shielding properties of different samples, including pure PMMA, PMMA/MWCNT, and PMMA/MWCNT/Bi 2 O 3 , compared with aluminum (Al). According to the electron-beam attenuation properties, the PMMA/MWCNT/Bi 2 O 3 nanocomposite was 37% lighter than Al while still providing the same level of radiation protection in the 9–20 MeV electron energy range 39 . Cao et al. investigated the γ-ray shielding performance and the physical and mechanical characteristics of PMMA composites doped with 0–44.0 wt% Bi 2 O 3 prepared by a fast-curing technique. The results showed that for radiation energies up to 1000 keV, PMMA/Bi 2 O 3 composites showed superior γ-ray shielding performance compared with pure PMMA. Additionally, hardness measurements showed that mechanical hardness rises with increasing Bi 2 O 3 loading 40 . Another study reported the use of PMMA/Bi 2 O 3 polymer composites as a replacement for concrete and gypsum in the construction of diagnostic radiation facilities 41 . Furthermore, lightweight, environment-friendly, and cost-effective materials based on flexible Bi-PMMA composites have been investigated as radiation shielding materials suitable for low-energy γ-rays 42 .
Recently, PMMA filled with different concentrations (0, 2, 5, 10, 15, and 20 wt%) of BaTiO 3 nanofiller was examined for use as a nuclear radiation shielding material. A non-toxic, flexible, and transparent nanocomposite protective material was developed, and the specimens containing 10–15 wt% showed enhanced radiation attenuation 43 .
Zirconium oxide (ZrO 2 ), also known as zirconia, is a material of significant technological importance owing to its outstanding corrosion resistance, high strength, high chemical stability, chemical and microbiological resistance, and high mass attenuation coefficient 44 . Furthermore, due to its excellent thermal stability and low neutron absorption cross-section, zirconia is applied in nuclear reactor technology. Wahab et al. evaluated the effect of zirconia nanoparticles on the radiation shielding performance of lead borate glass 45 . Given the considerable body of research cited above, incorporating nanofillers into polymers is a promising method for developing novel radiation-protective materials. There is also a strong need for further research into how filler size affects the shielding characteristics against γ-rays for various composite systems.
The survey of the literature reveals that very few studies deal with the use of nano ZrO 2 as a filler in a polymeric matrix to attenuate γ-rays. Hence, the primary goal of this work is to investigate how the particle size and weight percentage of ZrO 2 affect the ability of ZrO 2 /PMMA composites to shield against γ-rays. For this purpose, the mass attenuation coefficients of pure PMMA and of PMMA loaded with 15, 30, and 45 wt% of micro and nano ZrO 2 were estimated experimentally and compared with results obtained theoretically from the XCOM database. Additionally, other shielding parameters such as the linear attenuation coefficient (μ), half-value layer (HVL), tenth-value layer (TVL), mean free path (MFP), effective atomic number (Z eff ), effective electron density (N eff ), and equivalent atomic number (Z eq ), as well as the exposure buildup factor (EBF) and energy absorption buildup factor (EABF), were calculated at various energies between 0.015 and 15 MeV to assess the γ-ray shielding ability of the prepared ZrO 2 /PMMA composites.

Materials and methods
Materials
Self-cured acrylic resin (Acrostone Cold Cure Acrylic Resin), a commercial product with a density of 1.18 g/cm 3 supplied in two bottles, one containing a powder of poly(methyl methacrylate) prepolymer ((–CH 2 –C(CH 3 )(COOCH 3 )–) n , PMMA) and the other a liquid hardener of methyl methacrylate monomer (CH 2 =C(CH 3 )COOCH 3 , MMA), was the matrix material employed in this work; it was provided by the Acrostone Dental & Medical Supplies Company, Cairo, Egypt. The physical properties of the MMA liquid hardener and the PMMA powder are summarized in Tables 1 and 2 , respectively. Zirconium oxide micro- and nanoparticles were employed as fillers and obtained from Nanoshel (Wilmington, DE 19808, USA). The zirconium oxide microparticles had a purity of 99.9% and an average size of about 1–2 μm, while the zirconium oxide nanoparticles had a purity of 99.9% and an average size of about 80 nm. The physical properties of the zirconium oxide microparticles (MPs) and nanoparticles (NPs) are listed in Table 3 .
Cold (self)-cured PMMA
In comparison to heat-cured PMMA, cold-cured PMMA possesses a different composition and polymerization technique, making it unnecessary to apply thermal energy. It is also known as chemically cured or auto-polymerized PMMA, indicating that the polymerization process initiates immediately after the powder and liquid components are combined. Consequently, no heat is required for the polymerization reaction to occur since the benzoyl peroxide initiator present in the pre-polymerized PMMA pellets can be chemically activated. The advantages of cold-cured PMMA over heat-cured PMMA are its superior adaptability and dimensional stability, resulting in reduced polymerization shrinkage 47 .
Preparation of ZrO 2 /PMMA composites
This study utilized the self-curing method to fabricate pure PMMA, micro-ZrO 2 /PMMA, and nano-ZrO 2 /PMMA composites. Table 4 lists the sample codes, compositions, and densities of the produced composites. Three main groups of acrylic-resin specimens were fabricated: (a) the reference group (P-0Z), prepared by blending self-cured PMMA powder and liquid MMA in a 3:1 ratio by volume as recommended by the manufacturer; (b) the modified micro-ZrO 2 /PMMA group (P-15mZ, P-30mZ, P-45mZ), fabricated at constant microfiller loadings of 15, 30, and 45 wt%; and (c) the modified nano-ZrO 2 /PMMA group (P-15nZ, P-30nZ, P-45nZ), fabricated with the same loadings of nanofiller.
Before preparing the samples, each component was pre-weighed according to the weight fractions listed in Table 4 , using an electronic balance with 0.0001 g sensitivity (Analytical Balance, GR200, Japan). The reference group (P-0Z) was prepared by mixing the dry PMMA powder with the liquid MMA in a clean, dry glass beaker and stirring continuously at room temperature for a maximum of 4.5 min (according to the manufacturer) to eliminate any gas bubbles from the specimens. The mixing was performed using a mechanical mixer set to 20 rpm until the mixture reached the dough stage. The mixture was then poured into the center of an open silicone rubber mold, shown in Fig. 1 , until the mold was filled. The mold was gently shaken and tilted from side to side, then kept standing on the workbench at room temperature for 20 min after the start of mixing to allow the mixture to thicken and the surface of the casting to harden. The modified groups were prepared by mixing ZrO 2 nanoparticles or ZrO 2 microparticles with the PMMA powder in a glass beaker using an electric mixer for 20 min to create a homogeneous mixture. Then, as previously mentioned, the blended powder was added as one unit to the liquid monomer in a 3:1 ratio by volume. When the mixed acrylic resin reached the dough stage, it was packed in a silicone rubber mold for 2 h at room temperature (25 ± 2 °C) to obtain the specimen’s final shape, as shown in Fig. 2 .
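As a rough illustration of the weighing step, the ZrO 2 mass needed to reach a target filler weight fraction of the final composite can be derived from the resin mass. The short Python sketch below is illustrative only (the function name and the 100 g batch size are assumptions, and the 3:1 powder-to-liquid volume ratio is not modeled):

```python
def filler_mass(resin_mass_g: float, filler_wt_frac: float) -> float:
    """Mass of ZrO2 filler (g) needed so the filler makes up
    `filler_wt_frac` of the *total* composite mass:
    m_f / (m_f + m_resin) = w  =>  m_f = w * m_resin / (1 - w)."""
    w = filler_wt_frac
    return w * resin_mass_g / (1.0 - w)

# For a hypothetical 100 g batch of mixed PMMA/MMA resin:
for w in (0.15, 0.30, 0.45):
    print(f"{w:.0%} composite -> {filler_mass(100.0, w):.2f} g ZrO2")
```

Note that the 45 wt% composite needs almost as much filler as resin, which is consistent with the large density jump reported later for the most heavily loaded samples.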
Morphology study and structural characterization
The particle size of the micro and nano ZrO 2 powders was analyzed using a transmission electron microscope (TEM) (JEM 1400 Plus, JEOL, Japan) at 200 kV. Furthermore, the cross-sectional morphologies of the prepared samples were examined using a scanning electron microscope (SEM) (JSM-6010LV, JEOL). To enhance the SEM image quality, a thin layer of gold (20 nm) was coated onto the prepared samples using a low-vacuum sputter coating system (JEOL-JFC-1100E). SEM images were acquired to compare the dispersion of ZrO 2 micro- and nanoparticles within the PMMA matrix. TEM and SEM analyses were performed to study how the ZrO 2 particle size and its distribution affected the shielding properties of the investigated ZrO 2 /PMMA composites. The specimens were also analyzed using a Bruker Vertex 70 infrared spectrometer with a Platinum ATR unit (Germany) in the 4000–400 cm −1 range, together with an X-ray diffractometer (Shimadzu-7000), to collect FT-IR data and examine photon absorption and transmission in the IR region. FT-IR analysis was conducted to identify the functional groups in the ZrO 2 /PMMA composites and the interaction mechanism between the ZrO 2 particles and the PMMA polymeric matrix.
γ-ray spectroscopic setup
A cylindrical high-purity germanium detector (Model GC1520, Canberra, United States) was employed in conjunction with a multichannel analyzer to conduct γ-ray spectroscopic measurements. The detector has a relative efficiency of 15% within the 50 keV to 10 MeV range, with a resolution of 1.85 keV at the 1.33 MeV γ-ray peak of Co-60 48 . The detector was encased in a 15 cm thick lead shield to mitigate background radiation. The measurements were calibrated using standardized sources, including Am-241, Ba-133, Cs-137, Co-60, and Eu-152, in the energy range between 0.05953 and 1.4081 MeV. Table 5 outlines the emitted energies and activities attributed to these sources. The experimental arrangement of the γ-ray measuring system is illustrated in Fig. 3 .
The γ-ray spectrum for each measurement was obtained based on the sample thickness, such that the statistical error was less than 1%. For example, Fig. 4 depicts the obtained spectra using a Co-60 radioactive point source in the absence and presence of the P-0Z sample. The electrical signal generated by the detector was amplified and analyzed using Canberra’s Genie 2000 data acquisition and analysis software ISO 9001. The net area beneath the photo peak was calculated and divided by the acquisition time to determine the count rate. The counting rate was determined in the presence (I) and absence (I 0 ) of the specimen, respectively. Beer–Lambert’s law (Eq. 1 ) was then utilized to determine the linear attenuation coefficient (cm −1 ) of each sample at various γ-ray energies.
Attenuation parameters calculations
The linear attenuation coefficient (μ), a key shielding parameter quantifying the interaction of γ-rays of a given energy with the materials under study, can be calculated from Beer–Lambert’s law as indicated in Eq. ( 1 ) 49 :

I = I 0 exp(−μt), (1)

where the initial (I 0 ) and transmitted (I) intensities across the sample of thickness (t) are measured experimentally as discussed in the “γ-ray spectroscopic setup” section. The MAC can be derived using Eq. ( 2 ):

MAC = μ/ρ s , (2)

where ρ s is the density of the sample measured using the Archimedes method, Eq. ( 3 ) 50 :

ρ s = m 1 ρ w /(m 1 − m 2 ), (3)

where ρ w is the density of water at the test temperature, m 1 is the mass of the dry sample, and m 2 is the mass of the immersed sample.
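The μ, MAC, and Archimedes-density calculations translate directly into code. The following Python sketch is illustrative (variable and function names are my own, not from the paper):

```python
import math

def linear_attenuation(I0: float, I: float, t_cm: float) -> float:
    """Beer-Lambert: mu = ln(I0 / I) / t, in cm^-1, from the count
    rates without (I0) and with (I) the sample in the beam."""
    return math.log(I0 / I) / t_cm

def mass_attenuation(mu: float, rho_s: float) -> float:
    """MAC = mu / rho_s, in cm^2/g."""
    return mu / rho_s

def archimedes_density(m1: float, m2: float, rho_w: float = 0.9982) -> float:
    """Archimedes method: rho_s = m1 * rho_w / (m1 - m2), with m1 the
    dry mass, m2 the immersed mass, and rho_w the water density."""
    return m1 * rho_w / (m1 - m2)

# A sample that halves the beam intensity over 1 cm has mu = ln 2:
mu = linear_attenuation(1000.0, 500.0, 1.0)  # ~0.693 cm^-1
```

The count rates here stand in for the net photopeak areas divided by acquisition time, as described in the spectroscopic setup section.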
In developing an appropriate radiation shielding material, two further factors must be considered: the HVL and the TVL. These are the material thicknesses required to reduce the γ-ray intensity to 50% and 10% of its original value, respectively, and are estimated using Eqs. ( 4 ) and ( 5 ) 51 :

HVL = ln 2/μ = 0.693/μ, (4)

TVL = ln 10/μ = 2.303/μ. (5)
The MFP, defined as the average distance traveled by a photon between two successive interactions, is given by Eq. ( 6 ) 52 :

MFP = 1/μ. (6)
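Since the HVL, TVL, and MFP are all simple functions of μ, they can be cross-checked in a few lines of Python; the μ value below is illustrative, chosen to roughly match the HVL of about 0.261 cm reported later for P-45nZ at 0.0595 MeV:

```python
import math

def hvl(mu: float) -> float:
    """Half-value layer, ln(2)/mu (cm)."""
    return math.log(2) / mu

def tvl(mu: float) -> float:
    """Tenth-value layer, ln(10)/mu (cm)."""
    return math.log(10) / mu

def mfp(mu: float) -> float:
    """Mean free path, 1/mu (cm)."""
    return 1.0 / mu

# Consistency check: TVL/HVL = ln(10)/ln(2) ~ 3.32 for any material.
mu = 2.655  # cm^-1, illustrative value
print(hvl(mu), tvl(mu), mfp(mu))
```

With this μ, the three thicknesses come out near 0.261 cm, 0.867 cm, and 0.377 cm, consistent with the ratios HVL = MFP ln 2 and TVL = MFP ln 10.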
To determine a material’s radiation shielding capabilities, it is necessary to calculate its Z eff and N eff parameters. These values are obtained from the atomic cross-section (σ a ) and electronic cross-section (σ e ) of the material, which relate directly to the numbers of atoms and electrons present in a unit volume of the material; materials with higher σ a and σ e values are more effective shields. σ a , which gives the probability of interaction per atom within the material’s unit volume, is calculated using Eq. ( 7 ) 53 :

σ a = (1/N) Σ i f i A i (μ/ρ) i , (7)

where N is Avogadro’s number, and A i and f i are the atomic weight and fractional weight of each target element, respectively. σ e gives the probability of interaction per electron in the specimen’s unit volume and is expressed by Eq. ( 8 ) 54 :

σ e = (1/N) Σ i (f i A i /Z i )(μ/ρ) i , (8)

where Z i is the atomic number and f i the fractional abundance of the target element. From σ a and σ e , the effective atomic number is derived by applying Eq. ( 9 ) 55 :

Z eff = σ a /σ e . (9)
The N eff value refers to the number of electrons per unit mass of the interacting target and is given by Eq. ( 10 ) 56 :

N eff = (μ/ρ)/σ e . (10)
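Under these standard definitions, Z eff and N eff follow from the elemental mass attenuation coefficients and weight fractions. The Python sketch below assumes a simple composition list and is not the authors' code; in practice the elemental MAC values would come from XCOM:

```python
N_A = 6.02214076e23  # Avogadro's number, atoms/mol

def cross_sections(elements):
    """elements: iterable of (f_i, A_i, Z_i, mac_i) tuples with weight
    fraction f_i, atomic weight A_i, atomic number Z_i, and elemental
    mass attenuation coefficient mac_i (cm^2/g) at one energy.
    Returns (sigma_a, sigma_e)."""
    sigma_a = sum(f * A * mac for f, A, Z, mac in elements) / N_A
    sigma_e = sum(f * A * mac / Z for f, A, Z, mac in elements) / N_A
    return sigma_a, sigma_e

def z_eff(sigma_a: float, sigma_e: float) -> float:
    """Effective atomic number: sigma_a / sigma_e."""
    return sigma_a / sigma_e

def n_eff(mac_mixture: float, sigma_e: float) -> float:
    """Effective electron density (electrons/g): MAC / sigma_e."""
    return mac_mixture / sigma_e

# Sanity check: a single pure element must return its own Z.
sa, se = cross_sections([(1.0, 12.011, 6, 0.1)])
assert abs(z_eff(sa, se) - 6.0) < 1e-9
```

The single-element check is a useful unit test because both sums then share every factor except 1/Z, so the ratio must collapse to Z exactly.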
When constructing a shielding material, the two forms of buildup factor, the EABF and the EBF, are crucial parameters that should be taken into account. Owing to secondary γ-ray emission 57 , buildup factors are always greater than one and correct the attenuation estimates of Beer–Lambert’s law. The three steps below were followed, using the Geometric Progression (GP) fitting technique, to compute the EABF and EBF for the produced ZrO 2 /PMMA composites: (i) the composite’s Z eq value, which plays the role of the atomic number of an equivalent element, was first calculated with Eq. ( 11 ) 58 :

Z eq = [Z 1 (log R 2 − log R) + Z 2 (log R − log R 1 )]/(log R 2 − log R 1 ), (11)

where R 1 and R 2 represent the (μ Comp /μ total ) ratios for the elements with atomic numbers Z 1 and Z 2 , and R represents the (μ Comp /μ total ) ratio for the composite under investigation at a particular energy, which falls between R 1 and R 2 . (ii) The Z eq values of the composites were then employed to estimate the GP fitting EABF and EBF coefficients (b, c, a, X k , and d) in the energy range 0.015–15 MeV using the analogous interpolation formula 59 :

C = [C 1 (log Z 2 − log Z eq ) + C 2 (log Z eq − log Z 1 )]/(log Z 2 − log Z 1 ), (12)

where C 1 and C 2 are the GP fitting parameters obtained from the ANSI/ANS-6.4.3 standard data 60 , corresponding to the atomic numbers Z 1 and Z 2 between which Z eq of the produced ZrO 2 /PMMA composites is located. (iii) Finally, the resulting GP fitting parameters were used to compute the EABF and EBF with the following relationships 61 :

B(E, x) = 1 + (b − 1)(K^x − 1)/(K − 1), for K ≠ 1, (13)

B(E, x) = 1 + (b − 1)x, for K = 1, (14)

where

K(E, x) = c x^a + d [tanh(x/X k − 2) − tanh(−2)]/[1 − tanh(−2)], for x ≤ 40 MFP, (15)

and x represents the penetration depth in terms of MFP and E the energy of the incident photon.

Result and discussion
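The logarithmic interpolation for Z eq and the GP buildup evaluation described above can be sketched as follows; the GP parameters themselves must come from the ANSI/ANS-6.4.3 tables, so the function arguments here are placeholders:

```python
import math

def z_eq(R: float, Z1: int, R1: float, Z2: int, R2: float) -> float:
    """Logarithmic interpolation of the equivalent atomic number
    between the two tabulated elements (Z1, Z2) bracketing ratio R."""
    lR, lR1, lR2 = math.log(R), math.log(R1), math.log(R2)
    return (Z1 * (lR2 - lR) + Z2 * (lR - lR1)) / (lR2 - lR1)

def gp_buildup(b: float, c: float, a: float, Xk: float, d: float,
               x: float) -> float:
    """EBF/EABF from GP fitting parameters at penetration depth x
    (in units of MFP, valid for x <= 40)."""
    K = c * x**a + d * (math.tanh(x / Xk - 2.0) - math.tanh(-2.0)) \
        / (1.0 - math.tanh(-2.0))
    if abs(K - 1.0) < 1e-12:
        return 1.0 + (b - 1.0) * x          # K = 1 branch
    return 1.0 + (b - 1.0) * (K**x - 1.0) / (K - 1.0)
```

As a quick sanity check, when R coincides with R 1 the interpolation returns Z 1 , and when b = 1 the buildup factor is exactly 1 at any depth.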
Characterization
Transmission electron microscope (TEM) analysis
The TEM micrographs of ZrO 2 MPs and NPs are depicted in Fig. 5 . Figure 5 a shows that ZrO 2 MPs are nearly spherical in shape and have an average particle size between 1.46 and 1.75 μm. On the other hand, Fig. 5 b demonstrates the existence of ZrO 2 NPs, which have a consistent size distribution between 7.86 and 12 nm.
Scanning electron microscope (SEM) analysis
Figure 6 displays the SEM images of the P-0Z, P-15mZ, P-15nZ, P-45mZ, and P-45nZ composites and shows how ZrO 2 particles at the micro- and nanoscale affected the PMMA. Pure PMMA (Fig. 6 a) exhibits a smooth surface, distinctly different in morphology from the ZrO 2 /PMMA composites (Fig. 6 b–e). Comparing the SEM images of micro and nano ZrO 2 /PMMA composite samples with identical filler wt% (Fig. 6 b–e), it is clear that the ZrO 2 NPs are uniformly dispersed and thoroughly incorporated into the PMMA matrix, which may strengthen the interfacial adhesion between the PMMA matrix and the ZrO 2 NPs and offer an interconnecting structure for shielding. In contrast, the large ZrO 2 MPs in the micro ZrO 2 /PMMA composites are not fully covered by the PMMA matrix, and some of them peel off from the matrix owing to insufficient interfacial adhesion, leaving voids that degrade shielding. Additionally, it is evident that porosity decreases and dispersion improves as the particle fraction increases. These observations suggest that increasing the ZrO 2 content in PMMA will enhance the structural, mechanical, and shielding properties.
Fourier transform-infrared (FT-IR)
FT-IR spectroscopic analysis was performed on the produced samples using the Bruker Vertex 70 infrared spectrometer to identify the functional groups in the ZrO 2 /PMMA composites and the interaction mechanism between the ZrO 2 particles and the PMMA polymeric matrix. The FT-IR spectra of the P-0Z, ZrO 2 MPs, P-15mZ, P-45mZ, ZrO 2 NPs, P-15nZ, and P-45nZ samples were collected in the wavenumber region of 400–4000 cm −1 and are displayed in Fig. 7 . A discrete absorption band from 1142.19 to 1239 cm −1 can be seen in the FT-IR spectrum of P-0Z, as depicted in Fig. 7 a, which is related to the C–O–C stretching vibration. The vibrations of the methyl group can be assigned to the pair of bands at 1386 cm −1 and 750 cm −1 . The band at 980 cm −1 is a characteristic absorption vibration of PMMA, jointly with the bands at 1068 cm −1 and 839 cm −1 . A sharp peak at 1725.21 cm −1 arises from the stretching vibration of the ester carbonyl group (acrylate carboxyl group). The band at 1442.32 cm −1 can be attributed to the bending vibration of the C–H bonds of the –CH 3 group. The two bands at 2925.05 cm −1 and 2854.34 cm −1 can be linked to the C–H bond stretching vibrations of the –CH 3 and –CH 2 – groups, respectively. In addition, two faint absorption bands at 3734 cm −1 and 1644 cm −1 result from the stretching and bending vibrations of the –OH group, respectively 62 .
The FT-IR spectrum of the ZrO 2 MPs in Fig. 7 b shows broad peaks at 3428.96 cm −1 , 1634.05 cm −1 , and (456.10, 604 cm −1 ), which correspond to OH stretching, OH bending, and the Zr–O band, respectively. Peak locations, shapes, and intensities were identified from the fingerprint features together with the material’s fundamental components 63 . As seen in Fig. 7 c, the spectrum of the P-15mZ composite, compared with that of P-0Z, has new peaks at 453.94 cm −1 and 599.12 cm −1 , referring to the metal–oxygen bond in ZrO 2 . Furthermore, when the weight fraction of ZrO 2 MPs was increased to 45 wt%, two peaks appeared at 631 cm −1 and 698 cm −1 , as shown in Fig. 7 d, which are associated with the stretching vibration modes of Zr–O and the bending vibration of =C–H 64 . The clear presence of these peaks indicates that the ZrO 2 MPs were successfully embedded in the PMMA polymeric matrix.
Figure 7 e displays the FT-IR spectrum of the ZrO 2 NPs. The peaks at 453.92 cm −1 and 599.12 cm −1 are related to the strong metal–oxygen bond in the ZrO 2 NPs, and the two peaks at 631 cm −1 and 698 cm −1 are attributed to the Zr–O bond and the bending vibration of =C–H, as previously mentioned. Two additional bands connected to the ZrO 2 NPs are observed at 574 cm −1 and 708 cm −1 . It is also clear from Fig. 7 f,g that the spectra of the P-15nZ and P-45nZ composites exhibit behavior comparable to that reported above, together with the presence of two more peaks at 551 cm −1 and 598.04 cm −1 that formed at the higher concentrations of ZrO 2 NPs. Consequently, it can be deduced from Fig. 7 that chemical bonding occurred between the PMMA matrix and the ZrO 2 filler in all composites.
γ-ray shielding results
The MAC is a standard parameter utilized for measuring and comparing the performance of various shielding materials. Table 6 displays the experimentally measured MAC values for all the investigated composites (pure PMMA, micro ZrO 2 /PMMA, and nano ZrO 2 /PMMA composites) in the energy range between 0.05953 and 1.4081 MeV. Furthermore, using the XCOM software, MACs for the PMMA and micro ZrO 2 /PMMA composites were generated theoretically, and the relative deviation (Δ%) between the experimental and theoretical results was determined using Eq. ( 16 ):

Δ% = [(MAC exp − MAC XCOM )/MAC XCOM ] × 100. (16)
As can be seen from Table 6 , the MAC values obtained from XCOM and those achieved by laboratory measurement are in good agreement for all the energies tested, with only minor discrepancies between the two methods. Such small experimental errors are acceptable, and overall the experimental results agree with the XCOM results. This is a crucial step since it confirms the precision of the geometry used in the laboratory to measure the MAC for the PMMA and ZrO 2 /PMMA composites. According to Table 6 , Δ% for pure PMMA (free of ZrO 2 filler) lies between −0.65% and 0.94%, whereas Δ% for 15 wt% ZrO 2 /PMMA lies between −0.40% and 0.84%, and for 30 wt% ZrO 2 /PMMA it ranges between −0.94% and 0.61%. For 45 wt% ZrO 2 /PMMA it lies between −0.89% and 0.81%. These results confirm that the experimental and theoretical results are compatible, since Δ% is less than 2% in all cases.
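Taking the relative deviation as the usual signed percentage difference between measurement and XCOM, the agreement check can be scripted; the paired values below are hypothetical, used only to illustrate the |Δ%| < 2% criterion:

```python
def relative_deviation(mac_exp: float, mac_xcom: float) -> float:
    """Delta% = (MAC_exp - MAC_XCOM) / MAC_XCOM * 100."""
    return (mac_exp - mac_xcom) / mac_xcom * 100.0

# Hypothetical experimental/XCOM MAC pairs (cm^2/g):
pairs = [(0.1910, 0.1900), (0.0621, 0.0625)]
deltas = [relative_deviation(e, x) for e, x in pairs]
assert all(abs(d) < 2.0 for d in deltas)  # agreement criterion
```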
The experimental μ values of pure PMMA and of the micro and nano ZrO 2 /PMMA composites filled with various concentrations (15 wt%, 30 wt%, and 45 wt%) are shown as a function of γ-ray energy in Fig. 8 . As can be seen from Fig. 8 , the energy of the incident photons and the composition of the protective material have a significant impact on the μ values, which increase significantly with increasing micro- and nano-ZrO 2 concentration in the composites and decline rapidly as the photon energy increases. This trend can be explained by the three primary processes through which energetic photons interact with matter, the photoelectric effect, Compton scattering, and pair production, all of which contribute to the energy loss of the incident photon. At energies below 125 keV, the photoelectric effect is the primary mechanism of photon absorption, and the probability of photoelectric absorption depends on Z 3 , where Z is the atomic number 65 . Therefore, because of the Zr content (atomic number Z = 40), μ increases as the concentration of ZrO 2 in the PMMA matrix increases. However, as the photon energy increases beyond 125 keV, the likelihood of the photoelectric effect falls roughly as 1/E 3 66 , where E is the energy of the incident photons, which explains why μ for every composite decreases as the photon energy rises above 125 keV. Meanwhile, in this energy range, increasing the ZrO 2 filler wt% in the PMMA matrix only slightly affects μ, whose values remain nearly constant as the photon energy increases. This is because, in this intermediate energy range, the effect of photoelectric absorption diminishes and the Compton scattering mechanism takes over; the Compton scattering cross-section is practically independent of atomic number but depends on the number of electrons per unit mass.
Additionally, the μ values of micro- and nano-sized ZrO 2 /PMMA were compared, as shown in Fig. 8 . At all of the tested γ-ray energies, the nano ZrO 2 /PMMA curves always lie above the micro ZrO 2 /PMMA curves for the same weight percent of ZrO 2 filler. As the size of the ZrO 2 particles decreases from micro to nano scale, the uniform distribution of ZrO 2 NPs over a greater surface area within the PMMA matrix increases the likelihood of incident photons interacting with ZrO 2 NPs in the nanocomposites compared with the microcomposites and raises the probability of further scattering events, particularly for photon energies below 200 keV. Consequently, in PMMA-based radiation-protective material, ZrO 2 NPs exhibit better attenuation performance than ZrO 2 MPs for the same chemical composition and weight percentage of the composite.
To evaluate the superiority of the shielding ability of the nanocomposites over the microcomposites, the relative increase rate (δ%) in μ between nano and micro ZrO 2 /PMMA composites was calculated according to Eq. ( 17 ) and is depicted in Fig. 9 as a function of energy at the various ZrO 2 loadings:

δ% = [(μ nano − μ micro )/μ micro ] × 100. (17)
As shown in Fig. 9 , the relative increase rate (δ%) increases with ZrO 2 content; however, its value decreases as the photon energy increases from 0.05953 to 1.408 MeV. These findings suggest that, owing to the different photon interaction cross-sections at different photon energies, the size effect diminishes as the photon energy increases. The absorption capability is Z-dependent when the photoelectric effect dominates at low photon energies. Because of the mid-high Z of zirconium in the ZrO 2 particles and the low Z of the elements C, O, and H in the PMMA matrix, the photoelectric absorption of the ZrO 2 particles is considerably higher than that of the PMMA matrix, so these particles are extremely important for radiation shielding. At an energy of 0.05953 MeV, δ% was 16.88% for the P-15nZ sample, compared with 17.29% for P-30nZ and 17.84% for P-45nZ. Since Compton scattering is more likely at higher energies and its cross-section, the predominant interaction there, does not rely on Z but rather on the free electrons, there is little difference between the ZrO 2 particles and the PMMA matrix in this region. As a result, the essential role of the ZrO 2 particles diminishes and the impact of particle size decreases. At the highest examined energy (1.408 MeV), δ% was 6.67% for P-15nZ, whereas it was 7.23% and 8.60% for the P-30nZ and P-45nZ samples, respectively. In summary, δ% follows the general trend P-45nZ > P-30nZ > P-15nZ at all incident energies, so the P-45nZ composite has the best shielding potential of all the investigated samples.
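Taking the relative increase rate as the usual percentage gain of the nano over the micro composite, a minimal Python sketch (the μ values are hypothetical) is:

```python
def relative_increase(mu_nano: float, mu_micro: float) -> float:
    """delta% = (mu_nano - mu_micro) / mu_micro * 100."""
    return (mu_nano - mu_micro) / mu_micro * 100.0

# e.g. a nano composite 17.84% more attenuating than its micro twin
# (the value reported for P-45nZ at 0.05953 MeV) corresponds to:
mu_micro = 2.2527  # cm^-1, hypothetical
mu_nano = mu_micro * 1.1784
print(relative_increase(mu_nano, mu_micro))  # ~17.84
```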
Three primary radiation shielding parameters, the HVL, the TVL, and the MFP, were evaluated to assess the radiation shielding capabilities of the micro- and nano-structured composites 67 . Figure 10 displays the HVL, TVL, and MFP at energies ranging from 0.0595 to 1.408 MeV. The HVL is the thickness at which 50% of the original γ-ray intensity is transmitted through the sample. The calculated HVL values for the chosen composites at the energies employed for the μ data are displayed in Fig. 10 a, which shows a gradual increase in HVL as the energy rises from 0.0595 to 1.408 MeV. This tendency indicates that the photons’ ability to penetrate the samples rises along with their energy. The lowest HVL values are found at 0.0595 MeV (in the range 0.26111–3.2299 cm), with a significant increase toward higher energies (6.9944–10.1189 cm at 1.408 MeV), as shown in Fig. 10 a. This emphasizes that as the radiation energy rises, more photons are able to pass through the chosen samples. Figure 10 a further illustrates that adding ZrO 2 to the PMMA matrix is an effective method of reducing the HVL and improving the ability of the samples to attenuate γ-rays. P-45nZ presents the lowest HVL of all the materials at every energy. Our analysis shows that the HVL follows the sequence P-0Z > P-15mZ > P-15nZ > P-30mZ > P-30nZ > P-45mZ > P-45nZ. This pattern emphasizes that adding more ZrO 2 improves the photon shielding properties because ZrO 2 is denser than pure PMMA. Thus, ZrO 2 clearly reduces the HVL, making the P-45nZ composite optimal.
The TVL findings are shown as a function of energy in Fig. 10 b. At the lowest energy (0.0595 MeV), the TVL drops from 10.7269 cm for the P-0Z sample to 0.8673 cm for P-45nZ, because the TVL depends strongly on the sample density at all energies; evidently, a rising composite density results in a decreasing TVL. The TVL trend depicted in Fig. 10 b is consistent with that of the HVL in Fig. 10 a. The highest TVL values, at 1.408 MeV, range from 23.2349 cm for P-45nZ to 33.6143 cm for the P-0Z sample. The high ZrO 2 content of the P-45nZ sample is responsible for its high density and hence its low TVL. The MFP is the reciprocal of μ and behaves in a manner comparable to the HVL and TVL; the smaller the MFP of a composite, the better its radiation shielding ability. Figure 10 c depicts the relationship between the MFP of the investigated composites and the energy. At all energies, the MFP depends on the ZrO 2 content. Increasing the ZrO 2 loading from 0 to 45 wt% in PMMA increased the density of the samples from 1.176 g/cm 3 for P-0Z to 1.8330 g/cm 3 for P-45nZ. Consequently, the MFP values drop from 4.6598 cm for the P-0Z sample to 0.3767 cm for P-45nZ at 0.0595 MeV, while at the higher energy of 1.408 MeV the MFP drops from 14.599 to 9.90 cm. Thus, the P-45nZ sample needs a thinner shielding layer than the other specimens to attenuate the same radiation, and an increase in energy leads to a rise in the MFP. In conclusion, increasing the content and decreasing the size of the ZrO 2 particles lower the HVL, TVL, and MFP, which optimizes the radiation shielding.
The variation of Z eff and N eff with photon energy for the examined pure PMMA and micro ZrO 2 /PMMA composites is shown in Figs. 11 and 12 , respectively. Evidently, at low energies, Z eff and N eff reach their maximum values at 0.02 MeV and then decline as the energy increases. This trend can be attributed to the photoelectric cross-section, which varies inversely with photon energy approximately as E 3.5 . However, as the photon energy exceeds 0.3 MeV, Z eff becomes virtually independent of photon energy, which can be ascribed to the predominance of the Compton scattering mechanism. At high energies above 1.5 MeV, Z eff rises slowly with increasing photon energy; this trend is explained by the dominance of pair production in this higher energy region. Figure 11 also reveals that the Z eff values increase as the concentration of ZrO 2 filler in the PMMA matrix increases, owing to the mid-high-Z zirconium content and the high density of ZrO 2 , which raise the overall density of the PMMA-based composites. Therefore, P-45mZ, with 45% ZrO 2 , is found to have the highest Z eff value at all γ-ray energies, while the minimum Z eff corresponds to pure PMMA, which contains no ZrO 2 filler. As shown in Fig. 12 , N eff exhibits approximately the same behavior as Z eff , since the two parameters are strongly linked.
The Z eq describes the shielding characteristics of the chosen polymers in terms of an equivalent element and is also used in determining the buildup factor; composites with higher Z eq are better radiation-protective materials. Figure 13 depicts the Z eq values for the micro ZrO 2 /PMMA composites as a function of photon energy in the range 0.015–15 MeV. From Fig. 13 , it is obvious that adding increasing amounts of ZrO 2 to the PMMA matrix raises the Z eq at the same γ-ray energy. The P-0Z sample therefore has the lowest Z eq values, as seen in Fig. 13 , whereas the P-45mZ sample has the highest. Consequently, the P-45mZ composite has better shielding ability than the other PMMA composites, which is consistent with the earlier MAC results. Furthermore, Z eq increases to its maximum value for all the ZrO 2 /PMMA composites at 1 MeV, due to the Compton scattering (CS) process. The observed rise in Z eq is related to the high rates of CS interaction in the mid-energy γ region, where the Z eq calculation largely depends on the ratio (MAC CS /MAC total ), implying substantial Compton scattering in the medium energy zone. Z eq then drops rapidly as the γ-ray energy exceeds about 1.022 MeV, owing to the pair production process dominating at higher energies.
Figure 14 demonstrates the variations of EBF and EABF for P-0Z, P-15mZ, P-30mZ, and P-45mZ samples at various penetration depths as a function of photon energy. It is evident that the EBF and EABF values for the selected composites ascend to a maximal value at middle energies before beginning to fall. The predominant photon interaction mechanism in the low energy region is the photoelectric absorption, whose cross-section changes inversely with energy as E 3.5 . Thus, in this low-energy region, the selected composites can absorb the most photons because of the predominance of this process. Therefore, it causes the EBF and EABF values in the lower energy regions to decrease. On the other hand, pair production, another photon absorption mechanism with a cross-section that is inversely proportional to energy as E 2 , is also predominant in the higher energy area. Compton scattering, a predominant photon interaction process in the intermediate energy region, only reduces photon energy caused by scattering and cannot entirely remove the photon. Because the photon’s lifetime is longer in this energy range, it is more likely to escape from the polymer sample. The values of EBF and EABF are increased as a consequence of this process. Additionally, it is noted that repeated scattering events at large penetration depths cause an increase in the values of EBF and EABF to extremely high levels. It is essential to point out that the variance between EBF and EABF values at the same ZrO 2 concentration and the same energy is very close. Additionally, a significant decrease in the values of EBF and EABF, accompanied by a shift in their maximum values to higher energies, was observed as the ZrO 2 content increased.
The variance of EBF and EABF with the radiant energy of all the chosen composites has also been plotted in Fig. 14 a–d for certain penetrations depths up to 40 MFP to illustrate the effect of the chemical composition of the selected ZrO 2 /PMMA composites on the EBF and EABF. It is evident that the equivalent atomic number of the chosen polymers has an inverse relationship with the EBF and EABF. Thus, P-0Z, the lowest Z eq polymer, dominates EBF and EABF values at their maximums, while P-45mZ, the greatest Z eq polymer, dominates EBF and EABF values at their minimums. Because P-0Z is a polymer with low-Z components, it could have the highest EBF. Additionally, according to Fig. 14 a–d, increasing the thickness of the interacting substance, i.e. increasing the penetration depth of the chosen polymers, causes an increase in the scattering events inside the polymer. Consequently, the EBF and EABF values are incredibly high and display the highest values at the penetration depth of 40 MFP. In light of this, it can be said that P-45mZ has more vital X-ray and γ-ray shielding efficiency than P-0Z. | Result and discussion
Characterization
Transmission electron microscope (TEM) analysis
The TEM micrographs of ZrO 2 MPs and NPs are depicted in Fig. 5 . Figure 5 a shows that ZrO 2 MPs are nearly spherical in shape and have an average particle size between 1.46 and 1.75 μm. On the other hand, Fig. 5 b demonstrates the existence of ZrO 2 NPs, which have a consistent size distribution between 7.86 and 12 nm.
Scanning electron microscope (SEM) analysis
Figure 6 displays the SEM images of the P-0Z, P-15mZ, P-15nZ, P-45mZ, and P-45nZ composites and shows how ZrO 2 particles at the micro- and nanoscale affected the PMMA. Figure 6 a depicts a smooth, featureless surface, in distinct contrast to the ZrO 2 /PMMA composites; in other words, pure PMMA (Fig. 6 a) and the ZrO 2 /PMMA composites (Fig. 6 b–e) exhibit clearly different morphologies. The SEM images of micro and nano ZrO 2 /PMMA composite samples with identical filler wt% are compared in Fig. 6 b–d. It is clear that the ZrO 2 NPs are uniformly dispersed and thoroughly incorporated into the PMMA matrix, which may strengthen the interfacial adhesion between the PMMA matrix and the ZrO 2 NPs and offer an interconnected structure for shielding. In contrast, the larger ZrO 2 MPs are not fully covered by the PMMA matrix in the micro ZrO 2 /PMMA composites, and some of them peel off from the matrix due to insufficient interfacial adhesion, leaving voids that weaken the shielding. Additionally, it is evident that the porosity decreases and the particle dispersion improves as the fraction of particles increases. This behavior indicates that increasing the ZrO 2 content in PMMA will enhance the structural, mechanical, and shielding properties.
Fourier transform-infrared (FT-IR)
FT-IR spectroscopic analysis was performed on the produced samples using a Bruker Vertex 70 infrared spectrometer to identify the functional groups in the ZrO 2 /PMMA composites and the interaction mechanism between the ZrO 2 particles and the PMMA polymeric matrix. The FT-IR spectra of the P-0Z, ZrO 2 MPs, P-15mZ, P-45mZ, ZrO 2 NPs, P-15nZ, and P-45nZ samples were collected in the wavenumber region of 400–4000 cm −1 and are displayed in Fig. 7 . A discrete absorption band from 1142.19 to 1239 cm −1 can be seen in the FT-IR spectrum of P-0Z, as depicted in Fig. 7 a, which is related to the C–O–C stretching vibration. The vibrations of the methyl group can be assigned to the pair of bands at 1386 cm −1 and 750 cm −1 . The band at 980 cm −1 , together with the bands at 1068 cm −1 and 839 cm −1 , represents the characteristic absorption vibrations of PMMA. A sharp, intense peak at 1725.21 cm −1 was observed owing to the stretching vibration of the ester carbonyl group (acrylate carboxyl group). The band at 1442.32 cm −1 can be attributed to the bending vibration of the C–H bonds of the –CH 3 group. The two bands at 2925.05 cm −1 and 2854.34 cm −1 can be linked to the C–H stretching vibrations of the –CH 3 and –CH 2 – groups, respectively. In addition, two faint absorption bands at 3734 cm −1 and 1644 cm −1 result from the stretching and bending vibrations of the –OH group, respectively 62 .
In the FT-IR spectrum of ZrO 2 MPs in Fig. 7 b, bands can be seen at 3428.96 cm −1 , 1634.05 cm −1 , and (456.10, 604 cm −1 ), which correspond to OH stretching, OH bending, and the Zr–O bond, respectively. Peak locations, shapes, and intensities have been determined in accordance with the fingerprint features, together with the material’s fundamental components 63 . As seen in Fig. 7 c, the spectrum of the P-15mZ composite, compared to that of P-0Z, shows new peaks at 453.94 cm −1 and 599.12 cm −1 referring to the metal–oxygen bond in ZrO 2 . Furthermore, it is clear that when the weight fraction of ZrO 2 MPs was increased to 45 wt%, two peaks, as shown in Fig. 7 d, were seen at 631 cm −1 and 698 cm −1 , which are associated with the stretching vibration mode of Zr–O and the bending vibration of =C–H 64 . The presence of these peaks indicates that ZrO 2 MPs have been successfully embedded in the PMMA polymeric matrix.
Figure 7 e displays the FT-IR spectrum of ZrO 2 NPs. The peaks at 453.92 cm −1 and 599.12 cm −1 are related to the strong metal–oxygen bond in the ZrO 2 NPs, and two peaks at 631 cm −1 and 698 cm −1 are attributed to the Zr–O bond and the bending vibration of =C–H, as previously mentioned. At 574 cm −1 and 708 cm −1 , two additional bands have been observed that are connected to the ZrO 2 NPs. It is also clear from Fig. 7 f,g that the spectra of the P-15nZ and P-45nZ composites exhibit behavior comparable to that previously reported, together with the existence of two more peaks at 551 cm −1 and 598.04 cm −1 that appeared at greater concentrations of ZrO 2 NPs. Consequently, it can be deduced from Fig. 7 that a chemical bond formed between the PMMA matrix and the ZrO 2 filler in all composites, indicating chemical interactions between them.
γ-ray shielding results
The MAC is a standard parameter utilized for measuring and comparing the performance of various shielding materials. Table 6 displays the experimentally measured MAC values for all the investigated composites (pure PMMA, micro ZrO 2 /PMMA, and nano ZrO 2 /PMMA composites) in the energy range between 0.05953 and 1.4081 MeV. Furthermore, the MACs for the PMMA and micro ZrO 2 /PMMA composites were generated theoretically using the XCOM software, and the relative deviation (Δ%) between the experimental and theoretical results was determined using Eq. ( 16 ).
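Equation (16) itself is not reproduced in this excerpt; a relative deviation of this kind is conventionally defined as follows (a hedged reconstruction consistent with the signed Δ% values quoted from Table 6, though the original sign convention may differ):

```latex
\Delta\% = \frac{(\mu/\rho)_{\mathrm{XCOM}} - (\mu/\rho)_{\mathrm{exp}}}{(\mu/\rho)_{\mathrm{XCOM}}} \times 100
```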
As can be seen from Table 6 , the MAC values obtained from XCOM and those achieved by laboratory measurement are in good agreement, and this holds for all of the energies tested. The minor discrepancies between the two methods are acceptable, since small experimental errors are unavoidable; overall, the experimental results agree with the XCOM results. This is a crucial and significant step, since it confirms the precision of the geometry used in the lab to determine the MAC for the PMMA and ZrO 2 /PMMA composites. According to Table 6 , the Δ% for pure PMMA (free of ZrO 2 filler) is restricted to between − 0.65 and 0.94%, whereas the Δ% for 15 wt% ZrO 2 /PMMA lies between − 0.40 and 0.84%, and for 30 wt% ZrO 2 /PMMA it ranges between − 0.94 and 0.61%. For 45 wt% ZrO 2 /PMMA it lies between − 0.89 and 0.81%. These results confirm that the experimental and theoretical results are compatible, since the Δ% is less than 2%.
The experimental μ values of pure PMMA and of micro and nano ZrO 2 /PMMA composites filled with various concentrations (15 wt%, 30 wt%, and 45 wt%) are shown as a function of γ-ray energy in Fig. 8 . As can be seen from Fig. 8 , the energy of the incident photons and the composition of the protective material have a significant impact on the μ values. The μ values were found to increase significantly with increasing micro- and nano-ZrO 2 concentration in the composites and to decline rapidly as the photon energy increases. This trend may be explained by the three primary processes through which energetic photons interact with matter: the photoelectric effect, Compton scattering, and pair production, which all contribute to the energy loss of the incident photon. At energies less than 125 keV, the photoelectric effect is the primary mechanism that causes photons to be absorbed, since the probability of photoelectric absorption depends on Z 3 , where Z is the atomic number 65 . Therefore, owing to the element Zr with atomic number Z = 40, the μ values increase as the concentration of ZrO 2 in the PMMA matrix increases. However, as the photon energy increases beyond 125 keV, the likelihood of the photoelectric effect reduces roughly according to 1/E 3 66 , where E is the energy of the incident photons, which explains why μ for every composite decreases as the photon energy rises above 125 keV. Meanwhile, in this energy range, increasing the ZrO 2 filler wt% in the PMMA matrix only slightly affects the values of μ, which remain nearly constant as the photon energy increases. This is because, at this intermediate energy range, the effect of photoelectric absorption diminishes and the Compton scattering mechanism takes over; the cross-section of Compton scattering is practically independent of atomic number and depends instead on the number of electrons per unit mass.
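The μ values discussed above translate directly into transmission fractions through the narrow-beam (Beer–Lambert) attenuation law, I/I₀ = exp(−μx). As a quick illustrative sketch (using the μ value for the P-45nZ composite at 0.0595 MeV quoted later in the text; the slab thickness is chosen arbitrarily for illustration):

```python
import math

def transmitted_fraction(mu_cm, thickness_cm):
    """Narrow-beam transmission I/I0 = exp(-mu * x) through a slab."""
    return math.exp(-mu_cm * thickness_cm)

# mu for the P-45nZ composite at 0.0595 MeV, as reported in the text
mu_p45nz = 2.6546  # cm^-1

# A 1 cm slab of P-45nZ transmits only ~7% of 59.5 keV photons,
# i.e. the remaining ~93% are absorbed or scattered out of the beam.
fraction = transmitted_fraction(mu_p45nz, 1.0)
print(round(fraction, 4))
```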
Additionally, the difference in μ between micro- and nano-sized ZrO 2 /PMMA was also compared, as shown in Fig. 8 . At all of the tested γ-ray energies, the nano ZrO 2 /PMMA curves always lie above the micro ZrO 2 /PMMA curves for the same weight percent of ZrO 2 filler. As the size of the ZrO 2 particles decreases from micro to nano scale, the uniform distribution of ZrO 2 NPs over a greater surface area within the PMMA matrix increases the likelihood of incident photons interacting with ZrO 2 NPs in the nanocomposites, compared with the micro composites, and increases the probability of further scattering events until the photon’s energy falls below 200 keV. Consequently, in PMMA-based radiation-protective material, ZrO 2 NPs exhibit better attenuation performance than ZrO 2 MPs for the same chemical composition and weight percentage of the composite.
To evaluate the superiority of the shielding ability of nanocomposites over micro composites, the relative increase rate (δ%) in μ values between nano and micro ZrO 2 /PMMA composites was calculated according to Eq. ( 17 ) and depicted in Fig. 9 as a function of energy at various ZrO 2 loadings.
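Equation (17) is likewise not reproduced in this excerpt; a relative increase rate of this form is commonly defined as (a hedged reconstruction, consistent with the positive δ% values reported below):

```latex
\delta\% = \frac{\mu_{\mathrm{nano}} - \mu_{\mathrm{micro}}}{\mu_{\mathrm{micro}}} \times 100
```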
As shown in Fig. 9 , the relative increase rate (δ%) increases with increasing ZrO 2 content; however, its value decreases as the photon energy increases from 0.05953 to 1.408 MeV. These findings suggest that, owing to the different photon interaction cross-sections at different photon energies, the size effect diminishes as the photon energy increases. The absorption capability is Z-dependent when the photoelectric effect dominates at low photon energies. Due to the mid-high Z of the element zirconium in the ZrO 2 particles and the low Z of the elements C, O, and H in the PMMA matrix, the photoelectric absorption of the ZrO 2 particles is considerably higher than that of the PMMA matrix. As a result, these particles are extremely important for radiation shielding. At the energy of 0.05953 MeV, the δ% in the P-15nZ sample was 16.88%, compared to 17.29% in the P-30nZ sample and 17.84% in the P-45nZ sample. Since Compton scattering is more likely at higher energies, and its cross-section can be regarded as the predominant interaction that does not rely on Z but rather on the free electrons, there is little difference in the attenuation ability of the ZrO 2 particles compared to the PMMA matrix. As a result, the essential role of the ZrO 2 particles diminishes, and the impact of particle size decreases. At the highest examined energy (1.408 MeV), the δ% was 6.67% for P-15nZ, whereas it was 7.23% and 8.60% for the P-30nZ and P-45nZ samples, respectively. In summary, the δ% follows the general trend P-45nZ > P-30nZ > P-15nZ for all incident energies, so the composite P-45nZ has superior shielding potential over all the investigated samples.
Three critical radiation shielding parameters, the HVL, the TVL, and the MFP, have been studied in relation to the radiation shielding capabilities of the micro- and nano-structured composites 67 . Figure 10 displays the HVL, TVL, and MFP at different energies ranging from 0.0595 to 1.408 MeV. The HVL is the sample thickness at which 50% of the original γ-ray intensity passes through it. The results of our calculation of the HVL for the chosen composites at the energies employed for the μ data are displayed in Fig. 10 a. When analyzing the data in Fig. 10 a, one can see a gradual increase in HVL as the energy increases from 0.0595 to 1.408 MeV. This tendency indicates that the photons’ ability to penetrate the samples rises along with their energy. The lowest HVL is found at 0.0595 MeV (in the range of 0.26111–3.2299 cm), and there is a significant increase toward the higher energies (6.9944–10.1189 cm at 1.408 MeV), as shown in Fig. 10 a. This fact emphasizes that as the radiation’s energy rises, more photons will be able to pass through the chosen samples. Figure 10 a further illustrates that an effective method for reducing the HVL and improving the ability of the chosen samples to attenuate γ-rays is the addition of ZrO 2 to the PMMA matrix. Comparing P-45nZ to the other materials, P-45nZ presents the lowest HVL at every energy. Our analysis shows that the HVL follows the sequence: P-0Z > P-15mZ > P-15nZ > P-30mZ > P-30nZ > P-45mZ > P-45nZ. This pattern emphasizes that the addition of more ZrO 2 improves the photon shielding properties because ZrO 2 is denser than pure PMMA. Thus, it is clear that ZrO 2 can reduce the HVL, making the P-45nZ composite optimal.
The TVL findings are shown as a function of energy in Fig. 10 b. The TVL values at the lowest energy (i.e., 0.0595 MeV) show a decreasing trend from 10.7269 cm for P-0Z to 0.8673 cm for P-45nZ, because the TVL depends strongly on the sample density at all energies. It is evident that the decreasing TVL results from the rising density of the composite. The TVL trend depicted in Fig. 10 b is consistent with that in Fig. 10 a for the HVL. The highest TVL values, at 1.408 MeV, range from 23.2349 cm for P-45nZ to 33.6143 cm for the P-0Z sample. The high ZrO 2 content of the P-45nZ sample contributed to its high density and explains the sample’s low TVL. The MFP is the reciprocal of the μ values and behaves in a manner comparable to that of the HVL and TVL. The smaller the MFP of a composite, the better its radiation shielding ability. Figure 10 c depicts the relationship between the investigated composites’ MFP and the energy. At all energies, the MFP depends on the ZrO 2 content. Increasing the ZrO 2 loading from 0 to 45 wt% in PMMA led to an increase in the density of the samples, from 1.176 g/cm 3 for P-0Z to 1.8330 g/cm 3 for P-45nZ. Consequently, the MFP values drop from 4.6598 cm for the P-0Z sample to 0.3767 cm for P-45nZ at 0.0595 MeV. At the higher energy of 1.408 MeV, the MFP drops from 14.599 to 9.90 cm. Thus, we can deduce that the P-45nZ sample needs a thinner shielding layer than the other specimens to attenuate the same radiation, and we can also infer that an increase in energy leads to a rise in the MFP. In conclusion, increasing the content and decreasing the size of the ZrO 2 particles leads to lower values of the HVL, TVL, and MFP parameters, which optimizes the radiation shielding.
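The three thickness parameters are simple functions of μ (HVL = ln 2/μ, TVL = ln 10/μ, MFP = 1/μ), so the values quoted above can be cross-checked directly. A minimal sketch using the reported μ of P-45nZ at 0.0595 MeV:

```python
import math

def hvl(mu):
    """Half-value layer: thickness that halves the beam intensity."""
    return math.log(2) / mu

def tvl(mu):
    """Tenth-value layer: thickness that cuts the intensity to 10%."""
    return math.log(10) / mu

def mfp(mu):
    """Mean free path: average distance a photon travels between interactions."""
    return 1.0 / mu

mu_p45nz = 2.6546  # cm^-1 for P-45nZ at 0.0595 MeV, as reported in the text

# Reproduces the quoted values: HVL = 0.26111 cm, TVL ~ 0.867 cm, MFP = 0.3767 cm
print(round(hvl(mu_p45nz), 5), round(tvl(mu_p45nz), 4), round(mfp(mu_p45nz), 4))
```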
For the examined pure PMMA and micro ZrO 2 /PMMA composites, the variation of Z eff and N eff with photon energy is shown in Figs. 11 and 12 , respectively. Evidently, at low energies, Z eff and N eff reach their maximum values at 0.02 MeV and then decline as the energy increases. This trend can be attributed to the photoelectric cross-section, which is inversely proportional to photon energy as E 3.5 . However, once the photon energy exceeds 0.3 MeV, the value of Z eff becomes virtually independent of further increases in photon energy. This behavior arises because the Compton scattering mechanism predominates. At high energies above 1.5 MeV, the value of Z eff slowly rises as the photon energy increases; the dominance of pair production in this higher energy region explains this trend. Figure 11 also reveals that, as the concentration of ZrO 2 filler in the PMMA matrix increases, the values of Z eff increase. This increase is due to the density of ZrO 2 , which raises the overall density of the PMMA-based composites. Therefore, P-45mZ with 45% ZrO 2 is found to have the highest value of Z eff at all γ-ray energies, while the minimum Z eff corresponds to pure PMMA, which contains no ZrO 2 filler. As shown in Fig. 12 , N eff exhibits approximately the same behavior as Z eff , since the two parameters are strongly linked.
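For reference, Z eff and N eff for a composite are commonly computed as follows (a hedged sketch of the standard expressions from the shielding literature; the exact equations used in the original are not shown in this excerpt):

```latex
Z_{\mathrm{eff}} = \frac{\sigma_{t,a}}{\sigma_{t,el}},\qquad
N_{\mathrm{eff}} = \frac{(\mu/\rho)_{\mathrm{comp}}}{\sigma_{t,el}},\qquad
\sigma_{t,a} = \frac{(\mu/\rho)_{\mathrm{comp}}}{N_A \sum_i w_i/A_i},\qquad
\sigma_{t,el} = \frac{1}{N_A}\sum_i \frac{f_i\,A_i}{Z_i}\left(\frac{\mu}{\rho}\right)_i
```

Here \(w_i\), \(f_i\), \(A_i\), and \(Z_i\) are the weight fraction, molar fraction, atomic mass, and atomic number of element \(i\), and \(N_A\) is Avogadro's number. The strong link between the two parameters noted in the text follows directly, since both share the electronic cross-section \(\sigma_{t,el}\) in the denominator.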
The Z eq describes the shielding characteristics of the chosen polymers in terms of equivalent elements and is also used when determining the buildup factor. Composites with a higher Z eq are better radiation-protective materials. Figure 13 depicts the Z eq values for the micro ZrO 2 /PMMA composites as a function of photon energy in the range between 0.015 and 15 MeV. From Fig. 13 , it is obvious that adding increasing amounts of ZrO 2 into the PMMA matrix causes the Z eq to increase at the same γ-ray energy. Therefore, the P-0Z sample has the lowest Z eq values, as seen in Fig. 13 , whereas the P-45mZ sample has the highest values. Consequently, the P-45mZ composite has better shielding ability than the other PMMA composites, which is consistent with the earlier MAC results. Furthermore, it is also apparent that the Z eq increases to reach its maximum value for all the ZrO 2 /PMMA composites at 1 MeV due to the Compton scattering (CS) process. This pronounced rise in Z eq is related to the high rate of CS interactions in the mid-energy γ region, where the Z eq calculation depends largely on the ratio MAC CS /MAC total , implying substantial Compton scattering in the medium energy zone. Then, Z eq drops rapidly as the γ-ray energy exceeds 1.22 MeV because the pair production process dominates in the higher energy regions.
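Z eq at a given energy is typically obtained by logarithmic interpolation between the two elements whose Compton-to-total ratios bracket that of the composite (a hedged sketch of the standard formula, consistent with the MAC CS /MAC total ratio mentioned above; the original equation is not reproduced in this excerpt):

```latex
Z_{\mathrm{eq}} = \frac{Z_1\,(\log R_2 - \log R) + Z_2\,(\log R - \log R_1)}{\log R_2 - \log R_1},
\qquad
R = \frac{(\mu/\rho)_{\mathrm{Compton}}}{(\mu/\rho)_{\mathrm{total}}}
```

Here \(R\) is the ratio for the composite at the energy of interest, and \(R_1\), \(R_2\) are the corresponding ratios of the two consecutive elements \(Z_1\), \(Z_2\) between which \(R\) lies.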
Figure 14 demonstrates the variation of EBF and EABF for the P-0Z, P-15mZ, P-30mZ, and P-45mZ samples at various penetration depths as a function of photon energy. It is evident that the EBF and EABF values for the selected composites rise to a maximum at intermediate energies before beginning to fall. The predominant photon interaction mechanism in the low energy region is photoelectric absorption, whose cross-section varies inversely with energy as E 3.5 . Thus, in this low-energy region, the selected composites absorb the most photons because of the predominance of this process, which keeps the EBF and EABF values low. On the other hand, pair production, another photon absorption mechanism whose cross-section is inversely proportional to energy as E 2 , predominates in the higher energy region. Compton scattering, the predominant photon interaction process in the intermediate energy region, only reduces the photon energy through scattering and cannot entirely remove the photon. Because the photon’s lifetime is longer in this energy range, it is more likely to escape from the polymer sample, and the values of EBF and EABF increase as a consequence. Additionally, it is noted that repeated scattering events at large penetration depths raise the values of EBF and EABF to extremely high levels. It is essential to point out that the difference between the EBF and EABF values at the same ZrO 2 concentration and the same energy is very small. Additionally, a significant decrease in the values of EBF and EABF, accompanied by a shift of their maxima to higher energies, was observed as the ZrO 2 content increased.
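Buildup factors at penetration depth \(x\) (in mean free paths) are typically generated with the geometric-progression (G-P) fitting method, sketched here for reference since the fitting equations themselves are not reproduced in this excerpt (the fitting parameters \(a\), \(b\), \(c\), \(d\), and \(X_k\) are tabulated as functions of energy for each Z eq ):

```latex
B(E,x) =
\begin{cases}
1 + \dfrac{b-1}{K-1}\left(K^{x}-1\right), & K \neq 1\\[6pt]
1 + (b-1)\,x, & K = 1
\end{cases}
\qquad
K(E,x) = c\,x^{a} + d\,\frac{\tanh(x/X_k - 2) - \tanh(-2)}{1 - \tanh(-2)}
```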
The variation of EBF and EABF with photon energy for all the chosen composites has also been plotted in Fig. 14 a–d for penetration depths up to 40 MFP to illustrate the effect of the chemical composition of the selected ZrO 2 /PMMA composites on the EBF and EABF. It is evident that the equivalent atomic number of the chosen polymers has an inverse relationship with the EBF and EABF. Thus, P-0Z, the polymer with the lowest Z eq , exhibits the maximum EBF and EABF values, while P-45mZ, the polymer with the greatest Z eq , exhibits the minimum values. Because P-0Z is a polymer composed of low-Z elements, it has the highest EBF. Additionally, according to Fig. 14 a–d, increasing the thickness of the interacting substance, i.e., increasing the penetration depth in the chosen polymers, increases the scattering events inside the polymer. Consequently, the EBF and EABF values are extremely high and display their maxima at the penetration depth of 40 MFP. In light of this, it can be said that P-45mZ has superior X-ray and γ-ray shielding efficiency compared to P-0Z. | Conclusion
In the current study, seven PMMA-based polymer samples were prepared and reinforced with ZrO 2 MPs and NPs at concentrations of 15, 30, and 45 wt% to examine their radiation shielding capabilities for diverse purposes. The investigated composites are coded as P-0Z, P-15mZ, P-15nZ, P-30mZ, P-30nZ, P-45mZ, and P-45nZ. TEM was used to measure the average size of the ZrO 2 MPs and NPs. Furthermore, SEM was used to study the morphology and distribution of ZrO 2 MPs and NPs within the prepared composites. The analysis revealed that the ZrO 2 NPs had a uniform distribution inside the composites, along with a decline in sample porosity in comparison to the ZrO 2 MPs. The characteristics of the ZrO 2 /PMMA molecules were also investigated using FT-IR. The MAC was determined experimentally using the HPGe detector and five standard radioactive point sources. The experimental results agreed closely with those obtained theoretically from the XCOM database, indicating the precision of the setup used for computing the MAC for the prepared composites. The experimental findings showed that the prepared samples’ ability to attenuate γ-rays at all the examined energies depends on the size and concentration of the ZrO 2 particles. The findings of this research also demonstrated that PMMA filled with ZrO 2 NPs has higher μ values than PMMA filled with ZrO 2 MPs and pure PMMA at all selected energies. The P-45nZ sample had the highest μ values, which varied between 2.6546 and 0.0991 cm −1 as the γ-ray photon energy increased from 0.0595 to 1.408 MeV, respectively. In fact, the MAC for the P-45nZ sample is 1.448 cm 2 /g at 59.5 keV, which is higher than the values reported in the literature and very close to that of conventional lead at 661.66 keV. Furthermore, the highest relative increase rate in μ values between nano and micro ZrO 2 /PMMA composites was 17.84%, reported for the sample P-45nZ at 59.53 keV. The HVL, TVL, and MFP also demonstrated the superiority of ZrO 2 NPs over ZrO 2 MPs.
Z eff and N eff were increased by increasing the ZrO 2 content in the PMMA, which improved the γ-ray shielding efficiency. Owing to their easy and quick manufacture, simple processing, non-toxicity, light weight, cost-effectiveness, and environmental friendliness, the proposed composites have advantages over lead materials. Therefore, the developed nano ZrO 2 /PMMA composites are effective shielding materials that can be used to reduce the gamma dose in radiation facilities. Future research could further examine the capability of the proposed composites in shielding neutrons. | Abstract
This research aimed to examine the radiation shielding properties of unique polymer composites for medical and non-medical applications. For this purpose, polymer composites, based on poly methyl methacrylate (PMMA) as a matrix, were prepared and reinforced with micro- and nanoparticles of ZrO 2 fillers at a loading of 15%, 30%, and 45% by weight. Using the high purity germanium (HPGe) detector, the suggested polymer composites’ shielding characteristics were assessed for various radioactive sources. The experimental values of the mass attenuation coefficients (MAC) of the produced composites agreed closely with those obtained theoretically from the XCOM database. Different shielding parameters were estimated at a broad range of photon energies, including the linear attenuation coefficient (μ), tenth value layer (TVL), half value layer (HVL), mean free path (MFP), effective electron density (N eff ), effective atomic number (Z eff ), and equivalent atomic number (Z eq ), as well as the exposure buildup factor (EBF) and energy absorption buildup factor (EABF), to provide more shielding information about the penetration of γ-rays into the chosen composites. The results showed that increasing the content of micro and nano ZrO 2 particles in the PMMA matrix increases μ values and decreases HVL, TVL, and MFP values.
P-45nZ sample with 45 wt% of ZrO 2 nanoparticles had the highest μ values, which varied between 2.6546 and 0.0991 cm −1 as γ-ray photon energy increased from 0.0595 to 1.408 MeV, respectively. Furthermore, the highest relative increase rate in μ values between nano and micro composites was 17.84%, achieved for the P-45nZ sample at 59.53 keV. These findings demonstrated that ZrO 2 nanoparticles shield radiation more effectively than micro ZrO 2 even at the same photon energy and filler wt%. Thus, the proposed nano ZrO 2 /PMMA composites can be used as effective shielding materials to lessen the transmitted radiation dose in radiation facilities.
Author contributions
A.M.E.-K. proposed the main aim and the methodology of the research. A.Y.E. and M.T.A. prepared the composite materials. Mahmoud I. Abbas depicted all the figures. M.T.A. and A.Y. E. wrote the main manuscript text. All authors reviewed and revised the manuscript.
Funding
Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
Data availability
All data generated or analyzed during this study are included in this published article.
Competing interests
The authors declare no competing interests.

Sci Rep. 2024 Jan 13; 14:1279. License: CC BY.
PMC10787786 | 38218885 | Introduction
Antimicrobial stewardship aims to optimise drug use to prolong current therapeutic effectiveness and combat antimicrobial resistance (AMR) 1 . One key aspect of antimicrobial stewardship is the route of administration. It is common for critically ill patients to be given empirical intravenous (IV) antibiotic therapy upon admission due to rapid delivery, high bioavailability, and uncertainty surrounding a potential infection. Then later in the treatment regime once the patient is stabilized and the infection is understood, their antibiotics are often switched to an oral administration route. There is a well described focus to switch from IV-to-oral administration as early as possible and to use more oral drugs when appropriate, given they are often equally effective and can reduce side effects during prolonged exposure 2 – 5 . In a range of infectious diseases that were traditionally treated with IV only (e.g., bacteremia, endocarditis, and bone and joint infections), recent studies have demonstrated that oral therapy can be non-inferior to IV in efficacy 6 – 11 . Furthermore, reducing the unnecessary use of indwelling IV devices is a well established patient safety and infection prevention priority to minimise the risk of healthcare associated infections (HCAIs) 12 . Beyond the infection complications of IV catheters, oral administration is more comfortable for the patient, reduces nurses’ workload, and allows for easy discharge from the hospital. Furthermore, oral therapy is cheaper and more cost-effective 13 . The UK Health Security Agency recently published national antimicrobial IV-to-oral switch (IVOS) criteria for early switching 14 . The requirements were developed based on expert consensus and primarily revolve around the patient’s clinical and infection markers improving as well as specific points with regards to absorption, bioavailability, and infection type.
Despite significant evidence, the uptake of early oral therapy remains low 15 , 16 , and beyond guidelines 14 there is a lack of research in IV-to-oral decision support systems. Given this, we decided to investigate if a machine learning based clinical decision support system (CDSS) could assist antibiotic switch decision making at the individual patient level. More specifically neural network models were developed to predict, based on routinely collected clinical parameters, whether a patient could be suitable for switching from IV-to-oral antibiotics on any given day. ICU data was utilised given it is widely available, comprehensive, and if a CDSS can be developed for critical patients then it can likely be adapted to less severe settings. Many CDSSs utilising machine learning have been developed to assist with other aspects of antimicrobial use 17 – 19 ; however, limited clinical utilisation and adoption has been seen 20 . As such, when tackling this problem we wanted to ensure our CDSS solution was simple, fair, interpretable, and generalisable to maximise the ability for clinical translation. By simple we mean the model architecture can be understood by non-experts, while fair infers model performance is not biased to particular sensitive attributes or protected characteristics. Interpretability means predictions can more easily be understood, explained, and trusted. Finally, a model is generalisable if it can be applied to many healthcare settings with consistent performance. We imagine by providing individualised antibiotic switch estimations such a system could support patient-centric decisions and provide assurance on if switching could be appropriate or not in a given clinical context. Figure 1 shows an overview of this research. | Methods
Datasets
Two publicly available, large, de-identified real-world clinical datasets containing routinely collected EHR information were used within this research. MIMIC-IV (the 4th version of the Medical Information Mart for Intensive Care database), which contains over 40,000 patients admitted to the Beth Israel Deaconess Medical Center (BIDMC) in Boston, Massachusetts between 2008 and 2019 21 , 22 , was used for feature selection, model optimisation, and hold-out testing. Meanwhile, the eICU Collaborative Research Database, which contains data for over 200,000 admissions to ICUs across the United States from 2014 to 2015 22 – 24 , was used for transfer learning to confirm generalisability. Our study complies with all the data use and ethical regulations required to access the datasets. For both datasets, the patient population was filtered to those who received IV and oral antibiotic treatment within the ICU (IV treatment was limited to less than 8 days). Unfortunately, the datasets used in this research do not contain explicit information on whether, when, or why an IV-to-oral switch was considered. However, by utilising the available prescribing data and taking what the clinicians actually did as the label, we can approximate prescribing behaviour and train a machine learning model. We therefore focused on making a route of administration prediction for each day the patient was on antibiotics, given clinical decisions regarding antimicrobial treatment are most often made on a daily basis. As such, negative switch labels were defined as each day a patient was on IV antibiotics, while positive labels were defined as every other day (i.e., where the patient was on oral but not IV antibiotics). The antibiotic spectrum index (ASI) 25 was used to assess the average breadth of activity of IV and oral treatment regimes. By comparing the ASI on the day before switching with that on the first day of oral-only administration, we can understand how a change in route of administration is most often associated with a change in the ASI.
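The daily labelling scheme described above can be sketched as follows (an illustrative reconstruction, not the study's code; the `daily_switch_labels` helper, column names, and toy data are assumptions):

```python
import pandas as pd

def daily_switch_labels(prescriptions: pd.DataFrame) -> pd.DataFrame:
    """Assign one binary label per patient-day of antibiotic treatment.

    Label 0 ("no switch"): the patient received any IV antibiotic that day.
    Label 1 ("switch"): the patient received oral antibiotics only.
    Days without any antibiotic do not appear in the prescribing data.
    """
    daily = (
        prescriptions
        .groupby(["stay_id", "day"])["route"]
        .agg(lambda routes: 0 if (routes == "IV").any() else 1)
        .rename("switch_label")
        .reset_index()
    )
    return daily

# Toy example: three treatment days for one ICU stay.
rx = pd.DataFrame({
    "stay_id": [1, 1, 1, 1],
    "day":     [1, 2, 2, 3],
    "route":   ["IV", "IV", "PO", "PO"],  # day 2 has both -> still IV
})
labels = daily_switch_labels(rx)
print(labels["switch_label"].tolist())  # [0, 0, 1]
```

Note how a day with both IV and oral prescriptions still counts as an IV (negative) day, matching the definition that positive labels require oral-only treatment.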
Feature selection
Our aim was to make a model that, by utilising routinely available patient vitals, could act as a starting point for the decision making process and flag when a switch could be considered for a particular patient. The latest UK Health Security Agency (UKHSA) IVOS criteria 14 were analysed and ten related features were extracted from the datasets. Specifically: heart rate, respiratory rate, temperature, O2 saturation from pulse oximetry, systolic blood pressure, diastolic blood pressure, mean blood pressure, GCS motor response, GCS verbal response, and GCS eye opening (Supplementary Table 1) . White cell count and C-reactive protein were excluded due to data missingness, the requirement for a blood test, and the UKHSA guidelines stating that they should be considered but are not necessary for a switch. Other important aspects of the guidelines, such as infection type and absorption status, were also not included as input features to the model as much of this data was unavailable or collected in a way that makes it unsuitable for machine learning. Furthermore, evidence surrounding these is constantly changing 8 , 11 , 43 . We aimed to create a simple, generalisable model that uses only routinely available patient data and has the potential to be used in many different healthcare settings. The Canonical Time-series Characteristics (Catch22) methodology 44 (along with the mean and variance) was utilised through sktime 45 , 46 to transform temporal data into daily tabular values. This was done for each specific day and for the whole of the current stay. In addition, the difference between transformed values for a given day and the preceding day was calculated. SHapley Additive exPlanations (SHAP) values 26 and a genetic algorithm 47 were then used for feature selection.
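As a rough illustration of the daily transformation step, the sketch below computes only the mean and variance; the real pipeline applies the 22 Catch22 statistics via sktime in the same per-day, whole-stay-so-far, and day-difference fashion. The function name and the 24-samples-per-day layout are assumptions for the example:

```python
import numpy as np
import pandas as pd

def daily_summary_features(vitals: pd.Series) -> pd.DataFrame:
    """Collapse an hourly vital-sign series into daily tabular features.

    Simplified stand-in for the Catch22 pipeline: per-day mean/variance,
    the same statistics accumulated over the stay so far, and the
    day-over-day difference of the daily features.
    """
    by_day = vitals.groupby(vitals.index // 24)  # 24 hourly samples per day
    feats = pd.DataFrame({
        "day_mean": by_day.mean(),
        "day_var": by_day.var(ddof=0),
    })
    # Cumulative "whole stay so far" statistics, read off at each day's end.
    cum = vitals.expanding()
    last_idx = [(d + 1) * 24 - 1 for d in feats.index]
    feats["stay_mean"] = cum.mean().iloc[last_idx].to_numpy()
    feats["stay_var"] = cum.var(ddof=0).iloc[last_idx].to_numpy()
    # Difference between each day's features and the preceding day's.
    feats["day_mean_delta"] = feats["day_mean"].diff()
    return feats

# Toy heart-rate trace: 24 h at 90 bpm, then 24 h at 80 bpm.
hr = pd.Series(np.concatenate([np.full(24, 90.0), np.full(24, 80.0)]))
f = daily_summary_features(hr)
print(f["day_mean"].tolist())  # [90.0, 80.0]
```

Each such statistic becomes one column of the daily tabular dataset; stacking them for all ten vitals yields the 960 features reported in the Results.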
Specifically, an excessively large neural network with 851,729 trainable parameters was trained as a preliminary step, SHAP values were calculated, and those features with a value greater than or equal to 0.5 were selected for use in the genetic algorithm. The genetic algorithm optimised for AUROC and was run twice: once limited to a simple set of 5 features and once without a limitation on the number of features. The two runs used 10 iterations with 50 individuals and 25 iterations with 20 individuals, respectively.
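A minimal sketch of genetic-algorithm feature selection of the kind described (not the authors' implementation; the toy fitness function stands in for validation AUROC, and all names and parameters here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_feature_selection(fitness, n_features, n_select=5,
                              pop_size=20, n_gen=10, mut_rate=0.1):
    """Minimal elitist genetic algorithm over fixed-size feature subsets.

    `fitness(mask)` scores a boolean feature mask; the paper's first run
    limits subsets to 5 features, the second is unconstrained.
    """
    def random_mask():
        m = np.zeros(n_features, dtype=bool)
        m[rng.choice(n_features, n_select, replace=False)] = True
        return m

    def repair(m):  # restore exactly n_select features after crossover/mutation
        on = np.flatnonzero(m)
        if len(on) > n_select:
            m[rng.choice(on, len(on) - n_select, replace=False)] = False
        while m.sum() < n_select:
            m[rng.choice(np.flatnonzero(~m))] = True
        return m

    pop = [random_mask() for _ in range(pop_size)]
    for _ in range(n_gen):
        scores = np.array([fitness(m) for m in pop])
        elite = [pop[i] for i in np.argsort(scores)[-pop_size // 2:]]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.choice(len(elite), 2, replace=False)
            # Uniform crossover followed by bit-flip mutation.
            child = np.where(rng.random(n_features) < 0.5, elite[a], elite[b])
            flip = rng.random(n_features) < mut_rate
            children.append(repair(child ^ flip))
        pop = elite + children
    return max(pop, key=fitness)

# Toy fitness: pretend features 0-4 are the informative ones.
informative = set(range(5))
score = lambda m: len(informative & set(np.flatnonzero(m)))
best = genetic_feature_selection(score, n_features=98)
print(sorted(np.flatnonzero(best)))
```

Elitism guarantees the best subset found never degrades between generations; in the study the fitness evaluation would involve training and validating a model per candidate subset.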
Model development
The MIMIC-IV EHR dataset was randomly split (50%, 50%) based on patients' ICU 'stay_id' into a preprocessing and a hold-out set in order to generalise switch prescribing behaviour and obtain a reliable, unbiased estimate of the model's performance given the selected hyperparameters and feature set. The preprocessing set was split randomly into training, validation, and testing sets for feature selection as discussed above, with PyTorch 48 used to create the neural networks. After feature selection, Optuna 49 , with the objective of maximising the AUROC, was used to select the models' hyperparameters, and optimal alternative cutoff thresholds were determined from the preprocessing validation subset. Youden's index 27 was used to optimise the AUROC, while finding the point where precision, recall, and the F1 score were equal was used as a stringent cutoff for reducing the FPR. Subsequently, once the features and models were finalised, the unseen hold-out set was randomly split 10 times into stratified training, validation, and testing sets for evaluation. Specifically, 10 naive models based on the previously identified features and model hyperparameters were trained, and the final performance of these models was evaluated. The synthetic minority oversampling technique 50 was used during training to address label class imbalance. The Adam optimiser 51 was used with binary cross-entropy with logits loss. Training used 10 epochs, and the model with the greatest AUROC on the validation dataset was selected as the final model to obtain results on the unseen test set.
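The two cutoff strategies can be illustrated with a simple grid search over candidate thresholds (a sketch, not the study's code; in the paper the thresholds were fixed on the preprocessing validation subset):

```python
import numpy as np

def youden_threshold(y_true, y_score):
    """Cutoff maximising Youden's J = TPR - FPR over the observed scores."""
    best_t, best_j = 0.5, -1.0
    for t in np.unique(y_score):
        pred = y_score >= t
        tpr = (pred & (y_true == 1)).sum() / max((y_true == 1).sum(), 1)
        fpr = (pred & (y_true == 0)).sum() / max((y_true == 0).sum(), 1)
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t

def equal_prf_threshold(y_true, y_score):
    """Stringent cutoff where precision == recall (and hence == F1).

    Precision equals recall exactly when the number of predicted positives
    equals the number of real positives, so (assuming untied scores) we take
    the threshold admitting that many top-scored samples.
    """
    n_pos = int((y_true == 1).sum())
    return float(np.sort(y_score)[-n_pos])

y = np.array([0, 0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.7])
print(youden_threshold(y, s))     # 0.7
print(equal_prf_threshold(y, s))  # 0.7
```

On this toy data both rules agree; on real validation scores the equal precision/recall/F1 point sits higher, trading AUROC for a lower FPR as reported in the Results.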
Model evaluation
Standard ML metrics were used to evaluate model performance. Specifically, for the switch classification task the AUROC, accuracy, precision, TPR, FPR, F1 score, and area under the precision-recall curve (AUPRC) were calculated. The standard deviation was calculated to indicate the variation in results. To provide a baseline for comparison, two infection markers that are clearly defined within the latest guidelines 14 were also used separately to predict when switching could be appropriate for each patient. Specifically, the temperature must have been between 36 °C and 38 °C for the past 24 h and the Early Warning Score must be decreasing, upon which a switch would then be suggested for the rest of that patient's stay. It was not possible to include every aspect of the guidelines, as many are ambiguous or not recorded within the data. However, the baseline acts as a fair comparison to our models as it utilises similar patient data and actually contains additional information not fed into our models, such as the inspired O2 fraction. The best-performing final model and its respective hold-out split were used to break the distribution of labels and predictions down by IV treatment duration, to evaluate how predictions compare temporally to the real labels and discern when the model performed well versus poorly (Fig. 2 ). To understand the value of the models' switch predictions and how they relate to patient outcomes, the difference in days between predicted and real switch events was calculated and the mean LOS and mortality outcomes were taken (Fig. 3 , Supplementary Table 4) . Furthermore, we analysed whether there was a variation in the remaining ICU length of stay (LOS) for patients who remained on IV versus those who switched on that day (Supplementary Fig. 3) . This was done for dates with 2 to 7 days of previous IV treatment, based on the dissimilarity between model predictions and labels on those days (Fig. 2 ).
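The rule-based baseline can be sketched at the day level as follows (an illustrative reconstruction; representing the 24 h temperature window by a single daily value is a simplifying assumption):

```python
def baseline_switch_suggestion(daily_temp, daily_ews):
    """Guideline-derived comparator: suggest a switch once temperature has
    stayed in 36-38 degrees C for the past 24 h AND the Early Warning Score
    is decreasing, then keep suggesting it for the rest of the stay."""
    suggestions, triggered = [], False
    for day in range(len(daily_temp)):
        if not triggered and day > 0:
            temp_ok = 36.0 <= daily_temp[day] <= 38.0
            ews_falling = daily_ews[day] < daily_ews[day - 1]
            triggered = temp_ok and ews_falling
        suggestions.append(triggered)
    return suggestions

# Toy patient: fever settles and the EWS falls from day 3 onwards.
temps = [39.2, 38.5, 37.4, 36.9, 37.1]
ews   = [8, 7, 5, 5, 4]
print(baseline_switch_suggestion(temps, ews))
# [False, False, True, True, True]
```

Because the rule latches once triggered, it mirrors the evaluation setup where the baseline suggests switching for the remainder of the stay.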
For statistical analysis, the non-parametric Wilcoxon rank-sum (Mann-Whitney U) test with alpha set at 0.05 was used to test whether the difference in means was statistically significant, given the non-normal data distribution. Effect sizes were calculated using Cohen's d method with pooled standard deviation. Models were evaluated using functions and metrics from the Scikit-learn and SciPy libraries 52 , 53 .
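The effect-size calculation can be reproduced in a few lines of NumPy (the rank-sum test itself is available as `scipy.stats.ranksums`; the ASI values below are illustrative only, not study data):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation, as used for effect sizes
    alongside the Wilcoxon rank-sum test in this analysis."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

iv_asi   = [9, 8, 8, 7, 9]   # illustrative ASI values only
oral_asi = [6, 5, 6, 7, 5]
print(round(cohens_d(iv_asi, oral_asi), 2))  # 2.87
```

A positive d indicates the IV group has the higher mean ASI, consistent with the reported narrowing of spectrum upon switching.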
To further validate findings, evaluations were performed on specific patient populations and infectious diseases within MIMIC. Antibiotics with incomplete oral absorption (bioavailability < 90%) were determined through consultation with a pharmacist and a literature search of PubMed, the Electronic Medicines Compendium, and UpToDate. The final list of antibiotics with incomplete oral absorption is shown in Supplementary Table 7 . Total parenteral nutrition was used as a proxy for poor oral absorption (malabsorption), while hospital ICD diagnostic codes were used to identify patients with UTIs, pneumonia, and sepsis. These infections were chosen as they are highly prevalent in the dataset, and UTIs and pneumonia are commonly treated with oral antibiotics whereas sepsis sees less oral utilisation. Note that infection types in MIMIC are linked to the hospital stay 'hadm_id' only and not the specific ICU stay 'stay_id', as diagnoses are only coded for billing purposes upon hospital discharge. The best-performing short and long models from the MIMIC hold-out set were then evaluated on data extracted from the eICU database, using transfer learning to re-train and subsequently test the models. The same data processing pipeline was used for eICU, and transfer learning followed the same procedure as the evaluation on the MIMIC hold-out set, except that the models' parameters were initialised from the best-performing final MIMIC-trained models.
The best-performing final short and long models trained on the MIMIC hold-out set were used for the fairness and interpretability research. SimplEx 29 was used as a post-hoc explanation methodology to extract similar patient examples, their importance, and the contribution of each feature for each example. To this end, the corpus and test latent representations were first computed. SimplEx was then fitted, and the integrated Jacobian decomposition for a particular patient was calculated and displayed. To simplify visualisations, only those examples with an importance greater than 0.1 are shown. To assess model fairness, the demanding equalised odds (EO) metric was used, given we want to acknowledge and ideally minimise false positives as well as obtain equal performance across sensitive attribute classes. We defined that EO was achieved for a given sensitive attribute group if the TPR was not less than 0.1 below the global average and the FPR was not greater than 0.1 above the global average. EO was assessed utilising the 1st threshold for the sensitive attributes age (grouped into brackets based on the nearest decade), sex, race, insurance, language, and marital status. Threshold optimisation 30 was then employed to see if the models' fairness could be improved. Specifically, the postprocessing ThresholdOptimizer method from fairlearn 54 was used with the balanced accuracy objective and either the equalised odds, FPR parity, or TPR parity constraint.
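The equalised-odds check described above (TPR no more than 0.1 below, and FPR no more than 0.1 above, the global rates) can be sketched as follows (illustrative; group labels and data are toy values):

```python
import numpy as np

def equalised_odds_report(y_true, y_pred, groups, tol=0.1):
    """Return the sensitive-attribute groups failing the EO definition used
    in this work: TPR more than `tol` below, or FPR more than `tol` above,
    the global rates."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))

    def rates(t, p):
        tpr = (p & (t == 1)).sum() / max((t == 1).sum(), 1)
        fpr = (p & (t == 0)).sum() / max((t == 0).sum(), 1)
        return tpr, fpr

    g_tpr, g_fpr = rates(y_true, y_pred.astype(bool))
    failing = []
    for g in np.unique(groups):
        m = groups == g
        tpr, fpr = rates(y_true[m], y_pred[m].astype(bool))
        if tpr < g_tpr - tol or fpr > g_fpr + tol:
            failing.append(g)
    return failing

# Toy cohort: group A is classified perfectly, group B is inverted.
y   = [1, 1, 0, 0, 1, 1, 0, 0]
p   = [1, 1, 0, 0, 0, 0, 1, 1]
grp = ["A"] * 4 + ["B"] * 4
print(equalised_odds_report(y, p, grp))  # ['B']
```

In the study, groups flagged in this way were then targeted with fairlearn's post-processing threshold optimisation.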
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. | Results
Data
8,694 unique intensive care unit (ICU) stays were extracted from the MIMIC dataset 21 , 22 , with 1,668 from eICU 22 – 24 . Ten clinical features were selected based on the UK antimicrobial IVOS criteria 14 (Supplementary Table 1) . Transformation of these temporally dynamic values into time series features, as detailed in the Methods, resulted in 960 unique features for each day of each patient's stay. Details of the MIMIC preprocessing, MIMIC hold-out, and eICU datasets are shown in Table 1 . All the datasets are relatively balanced; however, the specific antibiotic utilisation distribution varies between MIMIC and eICU. Furthermore, eICU represents a more unwell population, given its higher proportion of life-threatening infections such as sepsis.
The antibiotic spectrum index (ASI) 25 indicates whether an antibiotic treatment regime has broad or narrow activity. Larger values indicate a broader spectrum, while smaller values correspond to more targeted activity. A statistically significant ( p -value < 0.01, statistic 1686390, alpha 0.05, effect size 0.87) difference was found between the mean ASI for IV and oral antibiotics upon switching (8.25 and 5.89, respectively) using the Wilcoxon rank-sum test. In addition, the majority of patients (70.03%) see a decrease in their treatment's ASI upon switching, with a mean decrease of 23.04%, although this was highly variable (Supplementary Fig. 1) .
Feature and model optimisation
The first excessively large neural network trained on the preprocessing training subset achieved an Area Under the Receiver Operating Characteristic curve (AUROC) of 0.76 on the preprocessing test subset. SHapley Additive exPlanations (SHAP) values 26 were calculated and the top 98 features were selected for input into a genetic algorithm. The genetic algorithm produced two sets of features: one short set containing only 5 features, and another longer set of 37. The short and long feature sets achieved an AUROC of 0.80 and 0.82 on the preprocessing test subset, respectively. The final features for each set are shown in Supplementary Table 1 , with the respective Catch22 time-series transformations listed in Supplementary Table 2 . During hyperparameter optimisation, our objective was to find the simplest models whilst maintaining performance. For both feature sets this was achieved, with new, less complex models found that achieve the same AUROC. The final hyperparameters of each model are shown in Supplementary Table 3 . Finally, alternative cutoff thresholds were explored for both models to maximise the AUROC and minimise the FPR (Supplementary Fig. 2) . This allows a traffic light system to be employed at deployment for simplicity and interpretability (Fig. 4 ). Youden's index 27 , which optimises the AUROC, found a 1st cutoff point of 0.54 and 0.52 for the short and long models, respectively. The point where precision, recall, and the F1 score were equal acted as a 2nd, stringent threshold. This cutoff point was 0.74 for the short models and 0.79 for the long models, resulting in a lower AUROC (0.70 and 0.74, respectively) on the preprocessing test subset but a superior false positive rate (FPR) (0.11 and 0.09, respectively, versus 0.26 and 0.22 using Youden's threshold).
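The two cutoffs map directly onto the traffic light scheme; a minimal sketch (the function name is assumed, threshold values are the short model's 0.54 and 0.74 from above):

```python
def traffic_light(score, t_lenient=0.54, t_stringent=0.74):
    """Map a model output to a traffic-light switch suggestion using the
    lenient (Youden's) and stringent (equal precision/recall/F1) cutoffs."""
    if score >= t_stringent:
        return "green"   # switch likely appropriate
    if score >= t_lenient:
        return "amber"   # switch potentially appropriate
    return "red"         # continue IV

print([traffic_light(s) for s in (0.30, 0.60, 0.90)])
# ['red', 'amber', 'green']
```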
Model evaluation
The final short models trained and tested on the hold-out set obtained a mean AUROC of 0.78 (SD 0.02) and FPR of 0.25 (SD 0.02) with the 1st (Youden's) threshold, and a mean AUROC of 0.69 (SD 0.03) and FPR of 0.10 (SD 0.02) with the 2nd threshold. Meanwhile, the final long models achieved a mean AUROC of 0.80 (SD 0.01) and FPR of 0.25 (SD 0.04) with the 1st cutoff, and a mean AUROC of 0.75 (SD 0.02) and FPR of 0.10 (SD 0.03) with the 2nd cutoff. Further evaluation metrics for each model and threshold can be found in Table 2 . For comparison, a baseline that utilised two clear infection markers (temperature and Early Warning Score) from the latest guidelines 14 obtained worse results, with an AUROC of 0.66, accuracy of 0.61, TPR of 0.75, and FPR of 0.43. Predictions and labels broken down by IV treatment duration (Fig. 2 ) show that the majority of incorrect predictions occurred in the middle of IV treatment days, where the models predicted a switch but the real label indicated the patient continued with IV. The short model on average predicted that 70% and 38% of patients could switch earlier than they did with the 1st and 2nd thresholds, respectively. Arguably, the long model demonstrated a more balanced profile with 51/28% early, 38/41% agreement, and 11/31% late switch predictions with the 1st and 2nd thresholds. When the difference between the real and predicted switch event was minimal, mean patient LOS outcomes were reduced (Fig. 3 ). Furthermore, a statistically significant difference (Wilcoxon rank-sum test, alpha 0.05) in remaining LOS was observed between those who received oral treatment versus those who had IV treatment, with 2, 3, and 4 prior days of IV treatment (oral mean, IV mean, p -value, statistic, and effect size of 1.03; 1.70; < 0.01; 555588; 0.39, 0.91; 1.89; < 0.01; 227473; 0.56, and 0.95; 2.02; < 0.01; 24572; 0.57, respectively). No statistically significant differences were observed on the later days 5, 6, and 7 (Supplementary Fig. 3) .
No mortality differences were observed, likely owing to the imbalanced data (Supplementary Table 4) .
eICU is a different dataset from MIMIC, covering distinct hospitals with a separate patient population and a unique data distribution. These differences can often cause problems for machine learning models but allow us to validate our features and modelling approach on an external dataset. When applied to eICU data via transfer learning, a mean AUROC of 0.72 (SD 0.02), 0.65 (SD 0.05), 0.72 (SD 0.02), and 0.64 (SD 0.06), and a FPR of 0.24 (SD 0.04), 0.05 (SD 0.02), 0.24 (SD 0.04), and 0.06 (SD 0.03) was obtained for the short and long models' 1st and 2nd thresholds, respectively (Table 2 ). Both models outperformed the eICU baseline, which obtained an AUROC of 0.55, accuracy of 0.67, TPR of 0.38, and FPR of 0.28.
Achieving target drug exposure against the pathogenic organism is important during antibiotic treatment and is often a concern when deciding to switch to oral administration 28 . For those patients who were on oral antibiotics with incomplete absorption, a mean AUROC of 0.73 (SD 0.03), 0.67 (SD 0.05), 0.77 (SD 0.02), and 0.73 (SD 0.03), and a FPR of 0.33 (SD 0.06), 0.12 (SD 0.04), 0.28 (SD 0.07), and 0.12 (SD 0.07) was achieved for the short and long models' 1st and 2nd cutoffs, respectively (Table 2 ).
If patients have issues with enteral absorption, oral antibiotic therapy is less likely to be suitable 14 . When tested on patients with poor absorption, a mean AUROC of 0.76 (SD 0.10), 0.75 (SD 0.11), 0.75 (SD 0.07), and 0.71 (SD 0.16), and a FPR of 0.48 (SD 0.20), 0.28 (SD 0.12), 0.43 (SD 0.14), and 0.12 (SD 0.12) was obtained for the short and long models' 1st and 2nd thresholds, respectively (Table 2 ).
Results were then examined for patients with specific infections. For urinary tract infection (UTI) patients, a mean AUROC of 0.77 (SD 0.03), 0.74 (SD 0.04), 0.78 (SD 0.02), and 0.77 (SD 0.04), and an FPR of 0.33 (SD 0.03), 0.15 (SD 0.03), 0.31 (SD 0.04), and 0.13 (SD 0.05) was achieved for the short and long models' 1st and 2nd cutoffs, respectively (Table 2 ). When tested on patients with pneumonia, a mean AUROC of 0.76 (SD 0.03), 0.76 (SD 0.03), 0.77 (SD 0.02), and 0.74 (SD 0.04), and a FPR of 0.35 (SD 0.03), 0.16 (SD 0.04), 0.32 (SD 0.04), and 0.14 (SD 0.04) was obtained for the short and long models' 1st and 2nd thresholds, respectively (Table 2 ). Finally, for sepsis patients, a mean AUROC of 0.82 (SD 0.05), 0.79 (SD 0.12), 0.77 (SD 0.07), and 0.76 (SD 0.18), and a FPR of 0.36 (SD 0.10), 0.17 (SD 0.09), 0.35 (SD 0.08), and 0.16 (SD 0.07) was achieved for the short and long models' 1st and 2nd cutoffs, respectively (Table 2 ).
Interpretability
Having two cutoff thresholds allows a simple traffic light system to be presented to clinicians regarding whether a switch could be appropriate at a particular time. To further improve interpretability and model understanding, the SimplEx 29 methodology was applied. Once fitted, the decomposition for a particular patient was computed to obtain corpus examples, their importance, and the feature contributions. This data was combined, and infectious disease clinicians were consulted to create informative visual representations. Figure 4 shows an example of these for short model predictions.
Fairness
Overall, the models demonstrated equalised odds (EO) across the majority of sensitive attribute groups. Table 3 shows the AUROC, TPR, and FPR for both short and long models by sensitive attribute group. The short model did not obtain EO for those in the age bracket of 90, of Native American descent, or with Medicaid insurance (Table 3 ). On the other hand, the long model only showed a discrepancy for patients in the age bracket of 30. For the short model, threshold optimisation 30 with the true positive rate (TPR) parity constraint enabled EO to be achieved for those in the age bracket of 90, while the EO constraint standardised performance across insurance groups (Supplementary Table 5 , Supplementary Fig. 4) . No constraint enabled the model to demonstrate EO for the Native American group. For the long model, the FPR parity constraint enabled EO to be obtained for those in the age bracket of 30 (Supplementary Table 6 , Supplementary Fig. 4) .
To maximise clinical utility, we aimed to minimise complexity during feature selection and model development. Through the genetic algorithm, two feature sets of interest were identified. The short set utilised only 5 features but maintained performance, while the long set enabled slight improvements in the evaluation metrics. The two most important SHAP features utilised the same time series transformation (SB_MotifThree_quantile_hh), applied to systolic blood pressure over the whole ICU stay and to heart rate over the current day, respectively. This measure uses equiprobable binning to indicate the predictability of a time series, which is medically relevant to switching the administration route, as clinicians look for vitals to stabilise before switching. Interestingly, the 3rd and 4th SHAP-ranked features represent the same type of feature (IN_AutoMutualInfoStats_40_gaussian_fmmi, calculated over the whole of the ICU stay) for two different clinical parameters, respiratory rate and mean blood pressure. Furthermore, their feature values (shown in Supplementary Fig. 5) are very similar, indicating why having both features was likely redundant to the models. Other features in the short set also demonstrate clinical importance. For example, the first minimum of an O2 saturation pulse oximetry autocorrelation function (CO_FirstMin_ac) would indicate variable stability, and hence clinical improvement or deterioration. Meanwhile, a value indicating the importance of low frequencies in the Glasgow Coma Scale (GCS) motor response (SP_Summaries_welch_rect_area_5_1) could show whether the patient is retaining consciousness over a long period, which is often necessary for administering oral medication. Overall, these features combine to provide a comprehensive yet succinct overview of the general health status of the patient, which can be used to determine if switching could be appropriate.
Results on specific infections and antibiotic characteristics demonstrate the models have stable performance across numerous different patient groups. Particularly important is understanding when oral antibiotics with incomplete absorption can be used, given concerns surrounding achieving therapeutic concentrations. Our long model achieved an AUROC of 0.77 (SD 0.02) in this subpopulation. Furthermore, in conditions such as sepsis, where patients are critically ill for prolonged periods and fewer oral therapies are utilised, our short model obtained an AUROC of 0.82 (SD 0.02), indicating that such a support system could be utilised in severe infections. Transfer learning results on the eICU dataset were stringent with regards to predicting when switching could be appropriate (Supplementary Fig. 3) ; this is to be expected considering the patients in eICU are on average more severely unwell than in MIMIC (Table 1 ). Switching administration route is influenced by many behavioural factors that are not easily modelled. Given eICU contains data from many different hospitals, the prescribing behaviour with regards to oral switching is likely much more heterogeneous than in MIMIC, whose data is from a single institution. As such, the eICU model has to approximate many different behaviours, which results in varying performance across institutions (Supplementary Fig. 6) and likely causes it to be more stringent with regards to predicting a switch in order to optimise performance. Similar behaviour is observed with the baseline eICU results, which confirms that predicting the route of administration is a more challenging task in eICU than in MIMIC (AUROC of 0.66 and 0.55, respectively). Further research into subpopulations and other datasets could identify unfavourable IV-to-oral switch characteristics, such as individuals with abnormal pharmacokinetics or immunosuppression.
Specific thresholding or separate models 31 could then ensure patients with such attributes require a larger output to be flagged as suitable for switching. Combining this with alternative thresholds to ensure fairness, though, can very quickly make CDSSs excessively complex, leading to misunderstanding, misuse, and reluctant adoption 17 , 20 , 32 . We believe this research strikes a practical balance between performance and usefulness for IV-to-oral switch decision support. Overall, the results demonstrate our methods and models are generalisable, as similar performance was obtained across all MIMIC tests with different patient populations, and between two distinct ICU datasets, indicating the feature sets identified are informative and that the selected hyperparameters can model the underlying data.
Overall, the models demonstrated reasonably fair performance across all sensitive attribute groups. When equalised odds were not achieved, threshold optimisation 30 was able to improve the results for a given group in all cases except that of the Native American group. This population was the most underrepresented within the data, with an average of only 11 patients in the test set, highlighting the need for further good quality real or synthetic data on minority populations. When threshold optimisation was undertaken, a trade-off between groups in a sensitive attribute class was sometimes observed. For example, the TPR parity constraint on the short model achieved EO for those in the age bracket of 90. However, it caused the FPR of those in the minority group around 20 years old to increase from 0.29 to 0.61 (Supplementary Table 5) . This loss in performance for the 20-year-old group was also partially seen for the FPR parity constraint on the long model (Supplementary Table 6) . This shows the importance of balance when considering whether a model is defined as fair, in particular for drastically different patient populations, such as 90- versus 20-year-olds. Prioritising one group or sensitive attribute can hinder model performance in others. As such, careful precautions and analysis are needed to ensure algorithms are equitable and reasonable, without discrimination. Moreover, for antibiotic decision making, further ethical considerations need to be taken into account, including the effect on individuals beyond the patient being treated 33 . We believe this analysis demonstrates such CDSSs can be fair; however, further validation is certainly required.
Two feature sets were used in this research to evaluate the trade-off between simplicity and explainability versus performance, which has been widely discussed in the machine learning literature 34 . Overall, results show that the long model often demonstrates slightly superior performance to the short model. However, it is inherently more complex, and in some scenarios, such as in those with sepsis, it performs worse than the short model. Further research, including understanding clinicians' opinions, is required to determine which model is most appropriate in specific circumstances. Alternative cutoff thresholds were also investigated for our binary classification task to maximise the AUROC and minimise the FPR. Results show that this was achieved for both the short and long models by fixing the thresholds from the preprocessing validation subset: the 1st threshold achieves a reasonable AUROC, while the 2nd threshold has a lower FPR, although as expected this comes at the expense of a worse AUROC score. We envisage such thresholds being utilised like a traffic light, whereby suggestions can be split into don't, potentially, or do switch based on the model's level of confidence (Fig. 4 ). This type of structure is simple and familiar to individuals and, along with interpretability methods, should ensure that such a model acts as an appropriate CDSS, allowing the end user to understand the output alongside other information in order to make the final decision.
Explainability and interpretability are critical aspects of using machine learning models in the real world 35 , 36 . To ensure our model and its outputs could be understood and interrogated, SimplEx 29 was utilised and visual representations created (Fig. 4 ). These visual summaries include a number of aspects that were noted as important for understanding by clinical colleagues. Firstly, textual descriptions enable key information to be conveyed quickly and reduce the barrier to adoption through universal understanding. Secondly, related patient examples are shown and scored. Clinicians rely heavily on prior experience when undertaking antibiotic treatment decisions 37 ; as such, showing historical examples and how they compare to the current patient of interest is perceived as appropriate. In conjunction, highlighting whether the model was correct on previous examples at each threshold provides some level of reassurance on how well the model performs on this type of patient, and therefore whether the predictions should be trusted. Finally, patient-specific feature contributions can be shown to illustrate how the model arrived at its conclusion. Figure 4 shows that while in many cases a clear switch decision is apparent, some days (e.g., day 3) and patients inherently present a particularly complex case. This reflects what is often seen in reality, with decisions regarding antimicrobial switching not being clear-cut. By incorporating interpretability methods, models such as those developed in this research can become clinically useful CDSSs.
The objective of a CDSS to support IV-to-oral switch decision making is to facilitate antimicrobial stewardship. ASI results are in line with current literature, indicating frequent oral prescribing may reduce overall use of broad-spectrum IV antibiotics and therefore could be beneficial from an AMR and HCAI perspective 12 , 38 . As such, this evidence supports the drive to maximise the use of oral therapies and, alongside limited adoption 15 , 16 , highlights why a switch-focused CDSS may be useful. It is, however, notoriously difficult to discern the value of predictions from a CDSS. A retrospective analysis was conducted to understand how such switch models may benefit healthcare institutions and patients. Figure 2 shows that for the first two days upon starting IV treatment our models predict that the majority of patients should not switch, which corresponds with the true labels. This is in line with the latest UK guidelines, whereby the IV-to-oral switch should be considered daily after 48 hours 14 . For dates with 2 to 7 prior days of IV treatment, though, a disconnect develops between the labels and model predictions. This is particularly apparent for the short model and the first, more lenient threshold. Model outputs indicate that by day 4 almost all patients could be suitable for switching to oral administration from a clinical parameter, health status perspective. For some patients there will be risk factors beyond the model's input features that the clinician considered, meaning they did not switch; but for others the clinician may have been unaware of or neglected the decision, meaning switching earlier may have been suitable. Furthermore, results show that LOS is minimised when predictions and the true labels align, and upon switching patients usually see prompt discharge. Our models may therefore be able to provide useful decision support by raising awareness of when switching could be suitable for a particular patient.
Given this decision is often neglected and postponed, such a CDSS may be able to promote switching when appropriate which could potentially support efforts to stop AMR, prevent HCAIs, and benefit patients.
To improve the clinical applicability of our solution a number of logic-based rules could be implemented. For example, if a patient has a certain type of infection, malabsorption, immunosuppression, has recently vomited, or could have compliance issues, an overriding rule based on the latest guidelines 14 could suggest not to switch. Furthermore, the number of days of IV treatment should be highlighted alongside conditions, such as sepsis, in which extra care should be taken, as these factors influence switch decision making. If a patient is receiving an IV antibiotic and a similar oral version is available, this could be flagged alongside model outputs as a ‘simple’ switch. Moreover, given the potential comfort, workload, and discharge benefits when patients have no IV catheters, CDSSs should consider the wider patient treatment paradigm, and potentially further encourage switching when IV access is only for antibiotic treatment. Finally, to improve practice it is important for clinicians to document when a switch occurred and why that decision was made. This ensures that, in the future, such individualised antibiotic decision making can be data-driven based on real evidence, rather than decided by habit or general population evidence. By combining machine learning approaches with clinical logic we can ensure patient safety while driving a positive change in antimicrobial utilisation. In the future we will conduct further research on how such solutions could be combined and implemented in real-time to create a complete CDSS for antibiotic optimisation, one that is well received by the clinical community and provides novel, useful information.
There are limitations to this research study. Firstly, the use of historical patient data means that all of our model's predictions are based on historical prescribing practices. Due to concerns surrounding AMR, there has been a large amount of research into antibiotic prescribing over recent years 17 , 39 – 42 , and hence it is plausible our model's switch suggestions are ‘out of date’. Secondly, our model only analyses a snapshot of the patient and not all the factors that are clinically used to assess a patient's suitability for switching 14 . As discussed in the methods, this is due to data challenges, but incorporating additional criteria into the model so that under certain circumstances a switch suggestion cannot be given is an avenue for future work. However, we believe that by analysing and summarising multiple variables regarding the patient's clinical and infection status such a system could support switch decision making, with the final decision always made by the clinician. Finally, the current work only evaluates such models on US-based ICU data. How such a system could perform in other medical settings and health systems, such as infectious diseases wards, the UK's NHS, and low- and middle-income countries, remains an outstanding question. But given the results presented and the routine, standardised nature of the raw input data, we believe our approach is generalisable and there is potential to translate this research into other non-ICU medical settings where oral therapy may be more commonly utilised.
In summary, we have identified clinically relevant features and developed simple, fair, interpretable, and generalisable models to estimate when a patient could switch from IV-to-oral antibiotic treatment. In the future, this research will require further analysis and prospective evaluation to understand its safety, clinical benefit, and how it can influence antimicrobial decision making. But given AMR, HCAIs, and the interest in promoting oral therapies, such a system holds great promise to provide clinically useful antimicrobial decision support. | Antimicrobial resistance (AMR) and healthcare-associated infections pose a significant threat globally. One key prevention strategy is to follow antimicrobial stewardship practices, in particular, to maximise targeted oral therapy and reduce the use of indwelling vascular devices for intravenous (IV) administration. Appreciating when an individual patient can switch from IV to oral antibiotic treatment is often non-trivial and not standardised. To tackle this problem we created a machine learning model to predict when a patient could switch based on routinely collected clinical parameters. 10,362 unique intensive care unit stays were extracted and two informative feature sets identified. Our best model achieved a mean AUROC of 0.80 (SD 0.01) on the hold-out set while not being biased with respect to individuals' protected characteristics. Interpretability methodologies were employed to create clinically useful visual explanations. In summary, our model provides individualised, fair, and interpretable predictions for when a patient could switch from IV-to-oral antibiotic treatment. Prospective evaluation of safety and efficacy is needed before such technology can be applied clinically.
The decision to switch patients from intravenous to oral antibiotic therapy is important for the individual and wider society. Here, authors show a machine learning model using routine clinical data can predict when a patient could switch.
Subject terms | Supplementary information
Source data
| Supplementary information
The online version contains supplementary material available at 10.1038/s41467-024-44740-2.
Acknowledgements
William Bolton was supported by the UKRI CDT in AI for Healthcare http://ai4health.io (Grant No. P/S023283/1) and by the National Institute for Health, Health Protection Research Unit in Healthcare Associated Infections and Antimicrobial Resistance at Imperial College London in partnership with the UK Health Security Agency (previously PHE), in collaboration with Imperial Healthcare Partners, University of Cambridge and University of Warwick. He is also affiliated to the Department of Health and Social Care, Centre for Antimicrobial Optimisation. The authors would like to acknowledge (1) the National Institute for Health Research Health Protection Research Unit (NIHR HPRU) in Healthcare Associated Infection and Antimicrobial Resistance at Imperial College London and (2) The Department for Health and Social Care funded Centre for Antimicrobial Optimisation (CAMO) at Imperial College London. This study is independent research partly funded by the National Institute for Health Research. The views expressed in this publication are those of the authors and not necessarily those of the NHS, the National Institute for Health Research, the Department of Health and Social Care or the UK Health Security Agency.
Author contributions
W.B., R.W., and T.M.R. contributed to study concept and design. W.B. contributed to data acquisition and analysis. W.B. and T.M.R. contributed to the manuscript drafting, discussion of the results, and review of the data. All authors contributed to data interpretation, as well as final revisions of the manuscript. All authors had full access to all the data in the study and had final responsibility for the decision to submit for publication.
Peer review
Peer review information
Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. A peer review file is available.
Data availability
Publicly available datasets were analyzed in this study. The MIMIC-IV dataset can be found at https://physionet.org/content/mimiciv/2.0/ and the eICU dataset at https://physionet.org/content/eicu-crd/2.0/ . Both are accessible once you are a credentialed user on PhysioNet, have completed the required training, and have signed the appropriate data use agreement. Specific additional data can be provided upon request to the authors, provided that it is in line with the datasets' data use and ethical regulations. Source data are provided with this paper.
Code availability
The computer code used in this research is available at https://github.com/WilliamBolton/iv_to_oral 55 .
Competing interests
Author T.M.R. was employed by Sandoz (2020), Roche Diagnostics Ltd (2021), and bioMerieux (2021–2022). These commercial entities were not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication. All authors declare no other competing interests. | CC BY | no | 2024-01-15 23:41:59 | Nat Commun. 2024 Jan 13; 15:506 | oa_package/6c/b5/PMC10787786.tar.gz |
PMC10787787 | 38218745 | Introduction
In recent years, intelligent transportation systems have become increasingly complex due to rapid urbanization and population growth, and there is a growing need for abnormal event detection in transportation networks. The Shanghai Bund trampling incident that occurred on December 31, 2014, in China is a widely known tragedy closely associated with traffic anomaly detection 1 . Furthermore, on January 26, 2017, in Harbin, the largest city in northeastern China, a single traffic incident resulted in a chain of rear-end collisions, leading to eight fatalities and thirty-two injuries 2 . These events show that early detection and prediction of anomalies before they occur are of significant value in preventing serious incidents. Therefore, an efficient and accurate anomaly detection system holds significant research value, as it enables continuous monitoring of specific indicators and effective prevention of potential anomalies.
Anomaly detection is widely used in smart cities, especially in intelligent transportation systems. The intelligent transportation system discussed in this paper is an artificial intelligence-based technology that aims to detect traffic anomalies. By learning the relationships between sensors, we can detect anomalies from sensor data 3 – 5 . However, traffic anomalies usually exhibit complex forms for two reasons: high dimensionality, sparsity, and anomaly scarcity (i.e., the need to correlate time and space, including speed or flow), and the difficulty of capturing the hidden relationships between nodes (i.e., spatial modeling in the face of different data sources with varying degrees of anomalies in density, distribution, and scale) 6 , 7 . Therefore, it is important to explore ways to capture complex inter-sensor relationships and detect anomalies from node relationships. Several methods are based on Generative Adversarial Networks (GANs) 3 . However, the generator of a GAN may fail to fully capture the hidden distribution of the data, which, in combination with the binary cross-entropy (BCE) loss function, leads to high false-alarm and missed-alarm rates. Most previous methods for anomaly detection are variants of Long Short-Term Memory (LSTM) 8 , 9 , such as FC-LSTM 10 , which focuses on capturing both the various static factors and dynamic interactions that affect traffic flow. Moreover, there exists a category of networks designed to address temporal dependencies, temporal convolutional networks (TCNs), which can capture global temporal information 11 . However, TCNs may not be flexible enough in the context of traffic timing, because the amount of historical information needed for model predictions varies across domains.
When TCNs face a dynamic transportation network, their performance may be poor because their receptive field is not large enough to describe its dynamics and complexity or to capture global contextual information 12 .
The most advanced approach employs a graph convolutional neural network (GCN) for spatial modeling and combines it with LSTM to handle anomaly prediction in time series 3 . There is also a method that, through adversarial training, learns the spatiotemporal features of traffic dynamics and traffic anomalies, respectively 13 . However, existing anomaly detection methods using graph convolutional neural networks (GCNs) do not adequately address data sparsity, nor do they capture unseen nodes and the spatiotemporal correlations between nodes in the traffic network.
To solve the above problems, we propose a mirror temporal convolutional module (MTCM) to capture anomalous information related to the input data and hidden dynamic nodes in traffic networks. We design two main modules in MTGAE: the mirror temporal convolutional module (MTCM) and the graph convolutional gated recurrent unit cell (GCGRU CELL). Combined with an adaptive input process, MTCM can efficiently handle road sections of varying lengths in the dataset. MTCM explores potential associations between nodes by learning the complex hidden relationships and dependencies between nodes in traffic networks. The GCGRU CELL module makes full use of existing prior knowledge (historical data). It captures road information, hidden node relationships, and dependencies for redistributing anomaly information, thus allowing us to obtain anomaly information more easily. We summarize the contributions of this paper as follows: We propose an anomaly detection framework called MTGAE, which maximizes the exploration of possible anomalies in the complex interdependencies between nodes and better captures the hidden node-to-node features in the traffic network. We construct a mirror temporal convolutional module, which self-adapts to the dataset and captures and cascades the hidden information between nodes by maximally enlarging the receptive field of the TCN. We propose the GCGRU CELL module, which captures long-term and short-term dependent anomalies in traffic-network space-time and maximizes the extraction of spatiotemporal features and possible anomaly information in cooperation with MTCM.
Although many traffic anomaly detection methods have achieved strong performance, they often overlook the hidden relationships between nodes during detection. For instance, traffic congestion during peak periods upstream can impact downstream traffic. This oversight leaves many models unable to capture long-term temporal correlations, spatial characteristics, and highly periodic trends. To address this, we aim to identify abnormal information and potential anomalies in the complex interdependencies among nodes in traffic networks. Consequently, we propose MTGAE, a traffic anomaly detection framework with node interaction (see Fig. 2 ).
MTGAE consists of two main modules: MTCM and GCGRU CELL. The original input first passes through an adaptive process, which allows our module to better adapt to existing datasets by converting graph signals in a low-dimensional space into latent vectors in a high-dimensional space. We then construct MTCM and GCGRU CELL. Specifically, we built MTCM to expand the hidden information in space-time. Internally, MTCM expands x into latent variables by mirror flipping, increases the dilation factors, and generates the hidden states H to capture long-term, complex spatiotemporal dependencies in combination with the TCN. Meanwhile, we built the GCGRU CELL module to capture long-term and short-term dependent anomalies in the traffic network. It combines the original inputs and the hidden spatiotemporal states H as prior information. We first redistribute this information through the Gaussian kernel module without changing the overall structure of the traffic network (see Fig. 2 ), then combine it with our GCN modules to extract more spatiotemporal information. Subsequently, based on the output of the first GCGRU CELL, the spatiotemporal information, and MTCM's hidden information H , the second GCGRU CELL module adds more hidden details to correct the defects generated earlier. Finally, we link the reconstructed results with the loss function to determine whether there are anomalies. In this section, we introduce the details of MTGAE.
Problem definition
In this paper, traffic anomalies are monitored and detected in a discrete time series. We denote the road network as a weighted graph G = (V, E, W), where V is the set of nodes (for example, two nodes v_i and v_j), E denotes the set of edges between nodes, and W is the weighted adjacency matrix. A larger weight between two nodes means they are closer in the road network, and vice versa (see Fig. 1 ). Given G, we aim to find the abnormal event in the graph G that disrupts regular traffic operation.
We obtain the hidden state through a specially designed contextual encoder, embed the information as a low-dimensional encoding, and then decode it to reconstruct the weighted adjacency matrix while minimizing the average reconstruction error. It should be noted that our model is trained only on data representing normal traffic conditions. Consequently, when an anomaly occurs in traffic operation, it deviates significantly from this ’normal’ baseline. This deviation is captured as a high reconstruction error by our model, effectively indicating the presence of an anomaly.
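A weighted adjacency matrix of the kind described above is commonly built from pairwise road distances with a thresholded Gaussian kernel. The sketch below illustrates this standard construction; the function name and the `sigma` and `eps` values are illustrative, not taken from the paper:

```python
import numpy as np

def weighted_adjacency(dist, sigma=10.0, eps=0.1):
    """Build a weighted adjacency matrix W from pairwise road distances:
    nearby nodes get weights close to 1, distant node pairs are cut
    to 0 to keep the graph sparse."""
    w = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    w[w < eps] = 0.0
    return w

# Three sensors: 0 and 1 are close, 2 is far from both.
dist = np.array([[0.0, 5.0, 40.0],
                 [5.0, 0.0, 40.0],
                 [40.0, 40.0, 0.0]])
W = weighted_adjacency(dist)
```

With these illustrative distances, the close pair keeps a large weight while the distant sensor is disconnected, matching the "larger weight means closer in the road network" convention.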
Encoder
Our encoder comprises three steps: the adaptive process, the mirror temporal convolutional module (MTCM), and the graph convolutional gated recurrent unit cell (GCGRU CELL). Initially, the original data, denoted as x, passes through the adaptive process, and MTCM is constructed to capture the evolving states among the road-network nodes that are not visible in the space-time continuum over time. In the GCGRU CELL, based on prior knowledge of the hidden states H from MTCM, our GCN layer, through the Gaussian kernel module, explores potential anomalies in the complex interdependencies between nodes. The encoder is trained to learn periodicity over the 24 hours in a day and the 7 days in a week, facilitating interaction between the GCGRU CELL and a fully connected layer (refer to Fig. 2 ). Finally, the graph embedding is applied.
Mirror temporal convolutional module (MTCM)
Inspired by the TCN 11 (see Fig. 3 a), we propose an improved module named MTCM for traffic prediction. Although TCNs can use dilated convolution to expand the receptive field, they are weaker than more advanced networks (e.g., the Transformer), which can use correlation information of arbitrary length. Moreover, TCNs need strong adaptability to different amounts of historical information, which may lead to uneven predictive power and receptive fields. To overcome these issues, we adapt the input before transmitting the traffic-network features to the TCN, reducing the effect of fluctuations in the available historical information on the TCN's capability. We then perform a mirror flip to further preserve the features and capture the complex hidden relationships and dependencies between nodes in the traffic network, exploring the potential associations between nodes. Furthermore, thanks to the one-dimensional convolution of the TCN, we can keep the output sequence consistent in length with the original input. Finally, this output sequence is passed on as the subsequent hidden state H . More formally, for a 1-D sequence input x and a filter f of kernel size k (the kernel size in Fig. 3 is 2) with dilation factor d (see Fig. 3 a), the dilated convolution operation F on element s of the sequence is defined as F(s) = sum_{i=0}^{k-1} f(i) * x~(s - d*i), where x~ = [flip(x) || x] is the mirror-flipped sequence input and || denotes concatenation. This further increases the receptive field and prevents more historical data from being lost in the dilated convolution process.
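One possible reading of this mirror-and-convolve operation can be sketched in plain numpy: flip the sequence, concatenate it in front of the original, run a dilated causal convolution, and keep the last len(x) samples so the output length matches the input. The function names and filter values are illustrative, not the paper's implementation:

```python
import numpy as np

def dilated_causal_conv(x, f, d):
    """y[t] = sum_i f[i] * x[t - d*i]; out-of-range terms are dropped."""
    T, k = len(x), len(f)
    y = np.zeros(T)
    for t in range(T):
        for i in range(k):
            j = t - d * i
            if j >= 0:
                y[t] += f[i] * x[j]
    return y

def mtcm_layer(x, f, d):
    """Mirror-flip x, concatenate, convolve, then keep the last
    len(x) samples so the output matches the input length."""
    x_tilde = np.concatenate([x[::-1], x])  # [flip(x) || x]
    return dilated_causal_conv(x_tilde, f, d)[-len(x):]

x = np.array([1.0, 2.0, 3.0, 4.0])
f = np.array([0.5, 0.5])  # kernel size k = 2, as in Fig. 3
y = mtcm_layer(x, f, d=1)
```

Because the convolution over the mirrored prefix sees reflected history instead of zero padding, early time steps lose less information than in a plain causal TCN.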
GCGRU CELL
The GCGRU CELL mainly includes the Gaussian kernel module and the GCN layer. We did not adopt the GRU model directly (as shown in Fig. 4 ) but instead construct a GCN model inspired by the GRU after the Gaussian kernel module. In the GCGRU CELL, we replace the original gated recurrent unit of the GRU with our GCN, which has two significant benefits: the reset gate helps capture short-term dependencies in the sequence, and the update gate helps capture long-term dependencies. This effectively predicts both long-term and short-term traffic-network cycles; combined with Gaussian kernel processing and the prior knowledge H (the hidden information from MTCM), the GCN can capture anomaly information and possible anomalies in the complex interdependencies among nodes while predicting. Unlike convolution on image data, graph convolution is an essential operation for extracting a node's features. Figure 3 b gives an example of an origin node (orange) taking the average of the node features within its neighbourhood (white nodes in the ellipse).
(1) Gaussian kernel module. To further enhance the anomaly detection capability of our module, we employ a Gaussian kernel function. It preserves the distributional characteristics of high-dimensional data, which is crucial for traffic-network anomaly detection. Specifically, Gaussian kernels facilitate mapping the data from its original space into a higher-dimensional feature space where complex traffic-network patterns and potential anomalies are more easily identified and processed. Moreover, the Gaussian kernel exhibits stability: it can manage minor fluctuations by adjusting the learned scale parameter (see Eq. 2 ) or by utilizing a minimax strategy 49 , thereby ensuring more stable anomaly detection results. In summary, embedding Gaussian kernels in the GCGRU CELL module enhances the model's performance and accuracy in detecting anomalies within complex traffic networks. Experimental data demonstrate that using Gaussian kernels to alter the data distribution effectively improves the accuracy of traffic anomaly detection (see Table 3 ). Building on this foundation, we further explored the anomaly detection capabilities of the Gaussian kernel module. As depicted in Fig. 5 , we performed an intermediate-variable exploration of the eight feature points generated by 490 edges entering the GCGRU CELL. This demonstrates the stability and data-mapping capability of our module through visualization of the intermediate variables before and after integrating the Gaussian kernel module into the GCGRU CELL. Throughout the experiment, the overall structure of the data remains unchanged, ensuring consistency and reliability. Our GCGRU CELL receives two input modes. The first input is the original input x after adaptation, together with the hidden information H from the MTCM. The second input is the output of the first GCGRU CELL, which also receives the hidden information H .
Then, the Gaussian kernel module computes the association weights as w_{i,j} = exp(-d(x_i, x_j)^2 / (2*sigma^2)), where sigma is generated based on the learned scale (we usually set its value between 0.5 and 1) and the i -th element corresponds to the i -th time point. Specifically, for the i -th time point, its association weight to the j -th point is calculated by the Gaussian kernel.
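The per-time-point association weighting just described can be sketched as a pairwise Gaussian kernel over a 1-D sequence; the row normalisation and the default `sigma` below are illustrative choices, with `sigma` standing in for the learned scale (0.5–1 in the text):

```python
import numpy as np

def gaussian_kernel_weights(x, sigma=0.75):
    """Association weight between time points i and j: close readings
    get large weights, outliers get small ones."""
    d2 = (x[:, None] - x[None, :]) ** 2   # pairwise squared distances
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)  # row-normalise

x = np.array([0.0, 0.1, 3.0])  # two similar readings, one outlier
w = gaussian_kernel_weights(x)
```

The outlier at 3.0 receives near-zero association with the two similar readings, which is exactly the redistribution effect that makes downstream anomaly scoring easier.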
(2) GCN layer. Generally, the traffic network is represented as a weighted digraph. Traditional graph convolution networks only operate on adjacent nodes, which makes short-term prediction better than long-term prediction. Therefore, spectral graph theory is used in this paper. Let A~ = A + I and establish the spectral matrix L = D~^{-1/2} A~ D~^{-1/2}, where I is the identity matrix, D~ is the degree matrix of A~, and A is the adjacency matrix. To explore deeper and more complex traffic networks, we extend the graph convolution network to a higher level and divide the traffic graph g ( x ) sent by the Gaussian kernel module into subgraphs, each of which considers its neighbour nodes, achieving higher-order information aggregation: Z_t = ReLU(L g(x_t) Theta), where Theta represents learnable weights and Z_t denotes the computed result of the graph convolution as time t increases.
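A single propagation step with this renormalised spectral matrix (the standard GCN layer of Kipf and Welling, which the construction above follows) can be sketched as follows; the matrices fed in below are illustrative:

```python
import numpy as np

def gcn_layer(x, a, theta):
    """One GCN step: L = D~^{-1/2} (A + I) D~^{-1/2}, then ReLU(L x theta)."""
    a_hat = a + np.eye(a.shape[0])                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    l = d_inv_sqrt @ a_hat @ d_inv_sqrt                 # spectral matrix
    return np.maximum(l @ x @ theta, 0.0)               # ReLU activation

# Toy path graph with three nodes and 2-D features.
a = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
x = np.ones((3, 2))
theta = np.ones((2, 4))
z = gcn_layer(x, a, theta)
```

Each output row mixes a node's own features with its neighbours', which is the neighbourhood-averaging behaviour illustrated in Fig. 3b.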
Separately, the use of the GRU 49 simplifies the model, reducing complexity and enabling faster, more effective sequence characterization. Compared to the LSTM, the GRU has fewer gating parameters, uses fewer training parameters, requires less memory, and offers faster execution and training. Owing to these advantages, our model adopts the GRU architecture over the traditional LSTM approach. We transform the gating units into graph convolution layers, as outlined in Eq. ( 3 ). This adaptation allows the GRU architecture to imitate the gating unit effectively. Consequently, the GCN layer can discern more hidden states from data processed by the Gaussian kernel module, capturing the dynamic spatial correlations within the traffic network and identifying previously unseen network connections. Formally, r_t = sigmoid(W_r x_t + U_r h_{t-1}), z_t = sigmoid(W_z x_t + U_z h_{t-1}), h~_t = tanh(W_h x_t + U_h (r_t ⊙ h_{t-1})), and h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h~_t, where h_{t-1} is the previous memory state, r_t and z_t are the reset and update gates, the W and U matrices are the weight parameters, x_t is the current feature input, sigmoid is the activation function, and ⊙ denotes element-wise multiplication. We combine GCN and GRU to capture the long-term dependencies between nodes in the graph.
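The gated update just described can be sketched as follows. Plain matrix products stand in for the graph convolutions of the full model, and all parameter names and shapes are illustrative:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gcgru_cell(x, h_prev, p):
    """GRU-style update; in the full model each matrix product would
    be the graph convolution of Eq. (3)."""
    r = sigmoid(x @ p["Wr"] + h_prev @ p["Ur"])          # reset gate
    z = sigmoid(x @ p["Wz"] + h_prev @ p["Uz"])          # update gate
    h_tilde = np.tanh(x @ p["Wh"] + (r * h_prev) @ p["Uh"])
    return (1.0 - z) * h_prev + z * h_tilde              # convex blend

rng = np.random.default_rng(0)
p = {k: rng.normal(scale=0.1, size=(4, 4))
     for k in ["Wr", "Ur", "Wz", "Uz", "Wh", "Uh"]}
h = gcgru_cell(rng.normal(size=(3, 4)), np.zeros((3, 4)), p)
```

The update gate z interpolates between the previous state and the candidate, which is how the cell balances short-term (reset gate) against long-term (update gate) dependencies.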
(3) Graph Embedding (GE). We construct a time embedding (referred to as the GE module in Fig. 2 ) after the second GCGRU CELL to effectively capture the intricate weekly and hourly periodicity inherent in the mobility data. The time embedding consists of two components: a time-of-day embedding e_t^{tod} and a day-of-week embedding e_t^{dow}. For example, at a specific time t (e.g., 13:00 on Saturday, July 30), we use e_t^{tod} (i.e., 13:00) and e_t^{dow} (i.e., Saturday) as the time embeddings. These embeddings incorporate additional temporal information as context for the conditioned encoder and decoder. By incorporating these temporal factors as graph embeddings, the model can accurately capture and represent the patterns and variations in mobility data associated with different times and days: E_t = [Z_t || e_t^{tod} || e_t^{dow}] W_e, where E_t is the graph embedding at time t, Z_t is from Eq. ( 4 ), and W_e is a weight matrix.
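The two periodic components can be realised as lookup tables, one row per hour of the day and one per day of the week. The table sizes (24 and 7) follow the text; the embedding width `D` and the random initialisation are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                  # embedding width (illustrative)
tod_table = rng.normal(size=(24, D))   # one vector per hour of day
dow_table = rng.normal(size=(7, D))    # one vector per day of week

def time_embedding(hour, weekday):
    """Look up and concatenate the time-of-day and day-of-week vectors."""
    return np.concatenate([tod_table[hour], dow_table[weekday]])

e = time_embedding(13, 5)  # 13:00 on Saturday (weekday index 5)
```

In training, these tables would be learned jointly with the rest of the model so that, for example, Saturday afternoons acquire a representation distinct from weekday rush hours.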
Decoder
In the decoder, we begin by extracting node-embedding information from the graph embedding. For each pair of node embeddings, we embed the time information into the information of each pair of nodes and compute the corresponding weight in the weighted adjacency matrix. We then combine these node embeddings and time embeddings to form a graph embedding that varies over time t . It contains both the node information and the time information (that is, the embedding includes the collective features of all nodes in the graph at moment t ). Subsequently, a fully connected layer processes this graph embedding to recover useful vector representations from it. After processing by the fully connected layer, the vectors corresponding to nodes i and j are unstacked to recover the embedding of each individual node at time t . Consequently, the outcome of this process is the embedding representation of a particular node n under the conditions of a specific time t . Finally, to obtain the reconstructed edge weights, we first use the ReLU activation function to process the graph embeddings, resulting in a feature vector that has undergone a nonlinear transformation. The reconstructed edge weights are then obtained from the feature vector via the Sigmoid function.
The presence of a bilinear module in the decoder is significant. The bilinear module applies a transformation to the incoming data, serving two main benefits: 1) it ensures that edge-weight predictions consider directionality; in the directed graph, the edge weight from node i to node j could differ from the weight from node j to node i. 2) It employs the formula w~_{ij} = Sigmoid(z_i^T A z_j) to calculate the edge weights, where A is a learned parameter matrix and z_i, z_j are the feature vectors of nodes i and j. This approach enables the model to distinguish edge weights based on direction, more accurately depicting directed-graph relationships. The Sigmoid ensures the output w~_{ij} lies in (0, 1).
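The direction-sensitivity of the bilinear score can be checked in a few lines: unless A is symmetric, z_i^T A z_j differs from z_j^T A z_i, so the decoded weights for the two directions of an edge differ. A is random here purely to illustrate the asymmetry:

```python
import numpy as np

def decode_edge_weight(z_i, z_j, a):
    """Directed edge weight: the bilinear score z_i^T A z_j is not
    symmetric in i and j unless A is; the sigmoid maps it into (0, 1)."""
    s = z_i @ a @ z_j
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4))            # learned parameter (random here)
z1, z2 = rng.normal(size=4), rng.normal(size=4)
w12 = decode_edge_weight(z1, z2, a)    # weight i -> j
w21 = decode_edge_weight(z2, z1, a)    # weight j -> i
```

Both outputs land in (0, 1), matching the sigmoid-bounded reconstructed weights described above, while the two directions generally disagree.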
Loss function
We use the mean squared error (MSE), a measure of the difference between the actual value y and the prediction y^, as the loss function to evaluate our model. Formally, L = (1/n) * sum_{i=1}^{n} (y_i - y^_i)^2, where i indexes each point in the sequence, the reconstructed weights are y^ and the actual weights are y. During testing, the loss function Eq. ( 7 ) for each testing instance is used as its anomaly score.
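Using the reconstruction loss directly as an anomaly score can be sketched as follows; the example edge weights are illustrative, showing that a weight the model cannot reconstruct (an anomaly) yields a much larger score than a well-reconstructed one:

```python
import numpy as np

def anomaly_score(w_true, w_rec):
    """MSE between actual and reconstructed edge weights; because the
    model is trained only on normal traffic, a high score at test time
    flags a likely anomaly."""
    return float(np.mean((w_true - w_rec) ** 2))

# Reconstruction close to the observed weights -> low score.
normal = anomaly_score(np.array([0.9, 0.8]), np.array([0.85, 0.8]))
# One edge weight collapses unexpectedly -> high score.
abnormal = anomaly_score(np.array([0.9, 0.1]), np.array([0.85, 0.8]))
```

In practice a threshold on this score (or a ranking of scores, as the AUC evaluation implies) separates anomalous instances from normal ones.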
(1) Comparison with state-of-the-art work. We first compared our MTGAE model with several baseline models using AUC as the evaluation metric. AUC accounts for the classifier's ability on both positive and negative cases, so it still provides a reasonable evaluation under sample imbalance. We fixed the two pollution-magnitude parameters and varied the anomaly rate to compare the models' ability to detect anomalies. As Table 1 shows, our MTGAE is significantly better than the other models, outperforming them by about 0.1–0.4 AUC at different anomaly rates.
After that, we fixed the time slice of pollution and varied the pollution magnitude to study the effect of the anomaly magnitude. As shown in Table 2 , we set the anomaly rate to 25% and 50%, respectively, holding the pollution magnitude the same within each setting. Our models exceed the baselines even at the higher pollution magnitudes. For example, the AUC of most models is above 0.9 at the lower magnitudes, while most baseline models do not perform well at the higher ones and only gradually recover AUC as the magnitude increases further; our model performs excellently in all settings, making it a more competitive model.
(2) Ablation study. In the ablation study, we assessed several MTGAE variants to evaluate the effects of the different parts of our MTGAE (see Table 3 ). The variants include: (i) MTGAE-gan: the GAN framework is used instead of an autoencoder. (ii) MTGAE-ot: the autoencoder approach with only the original TCN instead of our proposed MTCM. (iii) MTGAE-mt: the TCN is removed and our proposed MTCM module incorporated. (iv) MTGAE-lstm: the GRU is removed and replaced with an LSTM. (v) MTGAE-grumt: the LSTM is removed and replaced with the GCGRU CELL. (vi) MTGAE-Transformer: the GCGRU CELL is removed and replaced with a Transformer. (vii) MTGAE-gb: the Gaussian kernel module is incorporated into MTCM and GCGRU CELL, placed in the final data-processing stage of the GCGRU CELL. (viii) MTGAE: our complete model framework. The study indicates that the basic TCN variant underperforms unless combined with the Gaussian kernel function, which illustrates the importance of MTCM and the GCGRU CELL for anomaly detection. Notably, incorporating a mirror into the TCN significantly improves its efficacy in enhancing GCGRU CELL performance, demonstrating a superior ability to capture both long- and short-term memory and temporal information in time series.
(3) Real-world traffic anomalies. We used the NYC dataset from January 1, 2019, to January 7, 2019, to test the real-world traffic situation and demonstrate the effectiveness of our model. We used the reconstruction loss to represent the possibility of anomalies, as shown in Fig. 6 . January 4 was a Friday, and we can see that possible anomalies were densely distributed in the afternoon of that day, from which we can infer that Friday shopping traffic is prone to anomalies due to traffic jams.
(4) Sensitivity analysis. To study how MTGAE varies with the weekly, hourly, and node embedding dimensions, we fixed the pollution parameters. We explored the model's spatiotemporal sensitivity by varying the node-embedding dimension from 25 to 200 (a range acceptable to both GCN layers) and the week and hour dimensions of the temporal embedding from 10 to 200 during training. As shown in Fig. 6 b, our model does not change much, and the AUC remains between 0.9 and 1, indicating that our model works well in most environments. Moreover, we can further see that the AUC of our model is lower when the temporal-node embedding is large than when it is small.
(5) Generalization ability. To explore the generalization ability of MTGAE, we performed experiments on DGraphFin, a large-scale dynamic graph dataset in the financial domain 60 . It contains over 3.7 million nodes and 4.3 million dynamic edges. Nodes represent financial loan users, and directed edges represent emergency-contact relationships. Each of the 17 feature dimensions represents an element of the personal profile, such as age or gender. Among the nodes in the dataset, 15,509 are categorized as fraudsters, 1,210,092 as normal users, and the remaining 66.8% (2,474,949 nodes) are registered users who have not borrowed from the platform. Based on the officially published baseline and code, we input the DGraphFin data into our MTGAE, perform feature learning over the 17 features within the MTGAE structure, and finally classify nodes into two categories (as the other baselines do) for anomaly detection, with results shown in Table 4 . Compared with networks specifically designed for the DGraphFin dataset, the experimental results illustrate that our MTGAE possesses a certain generalization capability.
This paper proposes MTGAE, a spatio-temporal anomaly detection framework for traffic. In the encoder, we propose two modules: the Mirror TCN (MTCM) and a variant of GCGRU, namely the GCGRU CELL, which captures correlations in the spatial and temporal dimensions, together with a practical approach: the adaptive TCN. We then performed anomaly injection on the dataset using three contamination metrics and tested it on the NYC dataset. Experimental results show that our framework outperforms the baselines in traffic anomaly detection, particularly with respect to sparsity and high dimensionality, thereby contributing to further research. In future work, we will explore additional extensions of MTGAE on more datasets and further investigate methods for learning dynamic spatial correlations.

Abstract

Traffic time series anomaly detection has been intensively studied for years because of its potential applications in intelligent transportation. However, classical traffic anomaly detection methods often overlook the evolving dynamic associations between road network nodes, which leads to challenges in capturing the long-term temporal correlations, spatial characteristics, and abnormal node behaviors in datasets with high periodicity and trends, such as morning peak travel periods. In this paper, we propose a mirror temporal graph autoencoder (MTGAE) framework to explore anomalies and capture unseen nodes and the spatiotemporal correlation between nodes in the traffic network. Specifically, we propose the mirror temporal convolutional module to enhance feature extraction capabilities and capture hidden node-to-node features in the traffic network. Moreover, we propose the graph convolutional gated recurrent unit cell (GCGRU CELL) module. This module uses Gaussian kernel functions to map data into a high-dimensional space, and enables the identification of anomalous information and potential anomalies within the complex interdependencies of the traffic network, based on prior knowledge and input data.
We compared our work with several other advanced deep-learning anomaly detection models. Experimental results on the NYC dataset illustrate that our model performs best among the compared models for traffic anomaly detection.
Related work
In this section, we introduce the graph convolution networks, temporal convolutional networks, and autoencoder-based anomaly detection.
Graph convolution networks
Recently, Graph Neural Network (GNN) variants, such as Graph Convolutional Networks (GCN), have demonstrated ground-breaking performance on many deep-learning tasks. GCN is modular, scalable, generalizes well, and yields insights that direct further research 14 . It captures the complex dependencies of node embeddings by passing information across vertices 15 . Owing to these powerful properties, in GCN variants for intelligent transportation the road sensors of the traffic network are treated as nodes, and each node's traffic speed or flow rate is regarded as a dynamic input feature. Among them, the graph attention network (GAT) updates node features through a pairwise function between nodes with learnable weights 16 . However, it computes only one restricted form of static attention. To address this limitation, GATv2 17 introduces dynamic attention alongside static attention, allowing more adaptive computation of graph attention. In the subsequent development of GCN, CorrSTN 18 effectively incorporates correlation information into the spatial structure. PDFormer 19 captures both short-range and long-range spatial dependencies by utilizing various graph-masking strategies, which enables learning of dynamic urban traffic patterns and overcomes the restriction of modeling spatial dependencies statically. Moreover, STAEformer 20 takes into account the intrinsic spatial-temporal relationships and temporal ordering information in traffic time series. These methods are widely used in traffic forecasting, while graph embedding for traffic anomaly detection is less studied.
For example, ST-Decompn handles, through decomposition, the legitimate variation caused by changes in location and time in urban traffic, as well as anomalies that may manifest differently across datasets 21 . Con-GAE detects traffic anomalies using a semi-supervised autoencoder framework, but only for origin-destination (OD) datasets, addressing data sparsity and high dimensionality 13 . Besides, the graph convolutional adversarial network (STGAN) uses adversarial training and is divided into three modules that capture different features: a recent module for local dynamics, a trend module for long-term dynamics, and an external module for other traffic dynamics and anomalies; however, unsupervised learning with an adversarial neural network brings instability to anomaly detection 3 . Influenced by this state of the art, we borrowed the graph convolutional gated recurrent unit (GCGRU) 22 to address the spatiotemporal characteristics of traffic anomalies. Our work focuses on the traffic anomaly prediction capabilities of GCN.
Graph autoencoders (GAEs) are an unsupervised learning method: they map nodes to a latent vector space through an encoding process and reconstruct graph information from that vector to generate a graph similar to the original one (decoding) 15 , 23 . For example, ADN 24 is a graph autoencoder structure that achieves information diffusion through alternating spatial and temporal self-attention. Owing to the power of GAE 25 , it is widely used in different research directions, such as link prediction 26 – 30 , graph clustering 31 , 32 , and hyperspectral anomaly detection 33 . While a traditional GCN takes node features and an adjacency matrix as input and produces node embeddings as output, GAEs compress the node embeddings of all nodes in a graph into a single graph embedding to capture contextual information.
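The encode-then-decode idea can be sketched in a few lines. The toy below is a generic graph autoencoder forward pass (one normalized-adjacency propagation step plus an inner-product decoder), not the paper's MTGAE; all function names and the toy graph are ours:

```python
import math

# Minimal graph-autoencoder sketch (assumed, not the paper's MTGAE):
# encoder = one propagation step with the symmetrically normalized adjacency,
# decoder = sigmoid of inner products between node embeddings, which
# reconstructs edge probabilities.

def normalize(adj):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2."""
    n = len(adj)
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    return [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]

def encode(adj, feats):
    a_hat = normalize(adj)
    n, d = len(feats), len(feats[0])
    return [[sum(a_hat[i][k] * feats[k][j] for k in range(n)) for j in range(d)]
            for i in range(n)]

def decode(z):
    sig = lambda x: 1 / (1 + math.exp(-x))
    n = len(z)
    return [[sig(sum(zi * zj for zi, zj in zip(z[i], z[j]))) for j in range(n)]
            for i in range(n)]

# Toy graph: nodes 0-1 connected, node 2 isolated.
adj = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
feats = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
recon = decode(encode(adj, feats))
# The connected pair (0,1) is reconstructed with higher probability than (0,2).
print(recon[0][1] > recon[0][2])  # -> True
```

A trained GAE would add learnable weight matrices and minimize the reconstruction loss; this sketch only shows the data flow.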
Temporal convolutional networks
Earlier research methods focused on traffic-related problems but showed significant inaccuracies in anomaly prediction. In recent years, deep learning has gradually come to dominate time series prediction tasks thanks to its sophisticated data-modeling capabilities and autonomous learning abilities. Most studies in the transportation field rely on gated linear units (GLU) 34 or gated recurrent units (GRU) 35 to capture the dynamic temporal correlation of time series data. Moreover, based on the transformer architecture, STGM 36 introduces a novel attention mechanism to capture both long-term and short-term temporal dependencies. Temporal convolutional networks (TCNs) also have significant advantages in addressing temporal dependencies, especially in time series prediction tasks. However, most traffic flow anomaly prediction frameworks use the original TCN 37 , 38 structure without modification, and traffic anomaly detection remains under-explored. In this study, we enhance the TCN to better detect anomalies in this domain, allowing a more comprehensive analysis of time series data.
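The TCN building block referred to here is a causal dilated convolution; a minimal sketch (ours, not the paper's Mirror TCN) is:

```python
# Causal dilated 1-D convolution: output at time t depends only on inputs at
# t, t - dilation, t - 2*dilation, ... (never on the future). Stacking such
# layers with growing dilation gives a TCN its long receptive field.

def causal_dilated_conv(x, kernel, dilation=1):
    k = len(kernel)
    out = []
    for t in range(len(x)):
        s = 0.0
        for j in range(k):
            idx = t - j * dilation          # only past (causal) inputs
            if idx >= 0:
                s += kernel[j] * x[idx]
        out.append(s)
    return out

print(causal_dilated_conv([1, 2, 3, 4], [1.0, 1.0], dilation=1))
# -> [1.0, 3.0, 5.0, 7.0]
```

With `dilation=2` the same kernel skips every other step, widening the receptive field without extra parameters.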
Autoencoder-based anomaly detection
The autoencoder, an unsupervised neural network, has seen significant success across various fields. This success is largely due to its superior ability to discriminate between abnormal and regular inputs, making it widely used in anomaly detection 39 – 44 . In the field of graph convolutional networks, GCN-based autoencoders are also employed for anomaly detection 45 – 48 . They are mainly studied for graph embedding, which is consistent with the direction of our work, because the graph structure can connect the various points of an intricate world for anomaly detection.
Experiments
Datasets and implementation
To ensure the model's credibility, our experiments focus on public datasets commonly used for traffic anomaly detection. We verify our MTGAE method on two public traffic network datasets. PEMS-BAY dataset: it is collected in real time from nearly 40,000 individual detectors spanning the freeway system across all major metropolitan areas of California 50 . The dataset comprises 365 sensors located in the Bay Area and contains traffic data recorded from April to May 2014. For our analysis, we selected a subgraph of six sensors, each with recorded speed and traffic flow information. Furthermore, we extended the duration of each traffic incident from CHP (CHP Traffic Incident Information, https://www.chp.ca.gov/traffic ) by one hour to account for the impact of traffic accidents. New York City (NYC) taxi dataset: the NYC taxi trips dataset is publicly released by the Taxi and Limousine Commission (TLC). It records the time and location of each taxi pick-up and drop-off; we pool the records for each hour into a matrix. This dataset covers three months of data, from January 2019 to March 2019. Since the NYC dataset lacks labeled anomalies, we used anomaly injection to add anomalies into the time series of the dataset 51 , 52 .
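The hourly pooling of taxi records described above can be sketched as follows; the record format `(hour, origin_zone, dest_zone)` and the helper name are our assumptions, not the TLC schema or the authors' preprocessing code:

```python
from collections import defaultdict

# Illustrative sketch: pool taxi trip records into one origin-destination
# count matrix per hour, as described for the NYC dataset.

def hourly_od_matrices(trips, n_zones):
    """trips: iterable of (hour, origin_zone, dest_zone) tuples."""
    mats = defaultdict(lambda: [[0] * n_zones for _ in range(n_zones)])
    for hour, o, d in trips:
        mats[hour][o][d] += 1
    return dict(mats)

trips = [(8, 0, 1), (8, 0, 1), (8, 2, 0), (9, 1, 2)]
mats = hourly_od_matrices(trips, n_zones=3)
print(mats[8][0][1])  # -> 2
```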
Baselines
To validate our method's effectiveness in anomaly detection on the NYC dataset, we obtained the following methods from their official public code repositories and used their optimal experimental setups, running all models on the NYC dataset to guarantee fairness. Con-GAE 13 : developed to tackle the challenges posed by extreme data sparsity and high dimensionality, specifically to address anomalies in traffic conditions; it utilizes context-enhanced graph autoencoders to improve the effectiveness of anomaly detection. SuperGAT 53 : a self-supervised graph attention network that uses edge information to guide attention learning. SuperGAT analyzes two common attention forms, revealing their limitations in capturing label agreement and edge presence, and proposes enhanced attention mechanisms tailored to graph characteristics. EGC 54 : the Efficient Graph Convolution (EGC) method is an isotropic graph neural network (GNN) architecture. EGC outperforms comparable anisotropic models such as GAT and PNA in terms of accuracy and efficiency, challenging the prevalent belief that anisotropic GNNs are inherently superior. GraphGPS 55 : a modular and scalable framework for building graph transformers that integrates message passing with global attention. The framework also categorizes positional and structural encodings, thereby injecting useful inductive biases. GraphGPS demonstrates state-of-the-art performance on various graph learning tasks and scales effortlessly to thousands of nodes. GATv2 17 : graph attention networks (GATs) are limited by their computation of restricted "static" attention, inhibiting their ability to dynamically prioritize neighbors. To overcome this limitation, GATv2 alters the order of operations in the scoring function, enabling more expressive dynamic attention.
Dir-GNN 56 : enhances message passing neural networks (MPNNs) by incorporating edge directionality and conducting distinct aggregations for incoming and outgoing edges. It significantly improves learning on heterophilic graphs, where neighboring nodes often have different labels, and maintains performance on homophilic graphs, characterized by label-sharing neighbors. PMLP 57 : introduces propagational MLPs, which use an MLP architecture for training and add message-passing layers before inference. This approach bridges the gap between MLPs and GNNs, achieving performance comparable to or surpassing that of GNNs. It demonstrates the effectiveness of GNN architectures for generalization even without training in a graph context. Additionally, PMLPs offer faster and more robust training than GNNs.
Experimental setups
Our experiments were conducted on a GPU 2080 Ti and an Intel(R) Core(TM) i7-7700K CPU @ 4.20 GHz. To create anomalies, we used anomaly injection: we randomly selected time slices in each sequence and perturbed a portion of the corresponding time series with factors drawn from a uniform distribution (e.g., swapping 10 am with 10 pm). In this experiment, we set three pollution ratios and magnitudes , , and on the NYC dataset. Anomalies in traffic networks fall mainly into two types 58 , 59 : (1) spatial anomalies, where the current traffic conditions are inconsistent with normal traffic conditions (for example, the traffic flow is inconsistent with the normal flow of past travel); and (2) temporal anomalies, where the current traffic conditions conform to the normal spatial pattern but not to the current time. In this paper, we perform the following anomaly handling on the dataset: let represent the proportion of time slices randomly selected for contamination, applicable to the injection of both spatial and temporal anomalies; let denote the proportion of origin-destination pairs selected for contamination; and let define the range of the uniform distribution used to perturb the travel time. In effect, defines the magnitude of spatial anomalies, i.e., the maximum possible value of the travel-time perturbation.
In the experiments, we adjust the levels of the pollution ratios and magnitudes ( , , and ) to evaluate the effectiveness of anomaly detection under different scenarios. The specific steps are as follows. For spatial anomalies, we first randomly select a proportion ( ) of time slices, randomly choose a proportion ( ) of origin-destination pairs in each contaminated time slice, and then perturb the travel times of these pairs by factors drawn from the uniform distribution U ( , ). Temporal anomalies are created by randomly selecting a proportion ( ) of time slices and shifting the time in the data by 12 hours (e.g., changing 8 PM to 8 AM, and vice versa). We set , , and .
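The injection scheme above can be sketched as follows. Because the ratio symbols were lost in extraction, the parameter names `slice_ratio`, `pair_ratio`, and `delta` are our placeholders, and the helpers are an illustrative reconstruction rather than the authors' code:

```python
import random

# Sketch of the described anomaly-injection scheme:
# slice_ratio = fraction of time slices contaminated,
# pair_ratio  = fraction of OD pairs perturbed per contaminated slice,
# delta       = half-width of the uniform travel-time perturbation.

def inject_spatial(series, slice_ratio, pair_ratio, delta, seed=0):
    """Perturb travel times of selected OD pairs in selected time slices."""
    rng = random.Random(seed)
    out = [row[:] for row in series]            # series[t][pair] = travel time
    n_t, n_p = len(series), len(series[0])
    for t in rng.sample(range(n_t), int(slice_ratio * n_t)):
        for p in rng.sample(range(n_p), int(pair_ratio * n_p)):
            out[t][p] *= 1 + rng.uniform(-delta, delta)
    return out

def inject_temporal(series, slice_ratio, seed=0):
    """Replace selected time slices with the slice 12 hours away."""
    rng = random.Random(seed)
    out = [row[:] for row in series]
    n_t = len(series)
    for t in rng.sample(range(n_t), int(slice_ratio * n_t)):
        out[t] = series[(t + 12) % n_t][:]      # 12-hour shift
    return out

clean = [[10.0, 10.0] for _ in range(24)]
dirty = inject_spatial(clean, slice_ratio=0.05, pair_ratio=0.5, delta=0.5)
```

The fixed seed keeps the contamination reproducible across runs, which matters when comparing detectors on the same injected anomalies.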
For the training process, we initially set the number of epochs to 150 and the batch size to 10. For the day-of-the-week and hour-of-the-day metrics mentioned earlier, we set both and to 100; the dimensions of the graph embedding were set to 150 and 50, respectively; the dropout rate was set to 0.2; and the learning rate was 0.001 by default. We then applied learning-rate decay during training, multiplying the learning rate by 0.1 at each decay step, so that the model could learn the parameters better. Finally, we selected the NYC data from January 8 to March 31, 2019, as the training set and extracted 10% of it for validation. We used the NYC data from January 1 to January 7, 2019, and a portion of the Uber Movement data as the test set. Note that sampling was based on uniformly distributed random sampling, and the training and test sets were mutually exclusive (i.e., the same data point never appears in both).
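The decay rule described above (each step multiplies the previous learning rate by 0.1, starting from 0.001) can be written as a tiny helper; this is our sketch, not the authors' training code:

```python
# Step decay: lr_n = base_lr * factor^n after n decay steps.

def lr_after(base_lr, n_decays, factor=0.1):
    lr = base_lr
    for _ in range(n_decays):
        lr *= factor
    return lr

print(lr_after(0.001, 0), lr_after(0.001, 1), lr_after(0.001, 2))
```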
In addition, ablation experiments were performed on the PEMS dataset to verify the effectiveness of the proposed modules, evaluated using the MAE metric. MAE and RMSE are regarded as more credible evaluation metrics for some anomaly detection tasks, especially in the traffic area 3 . Six epochs were set for training, each divided into 128 batches; the generator loss function was set to 500; the learning rate was 0.001 and decayed by a factor of 0.1 per epoch. On this dataset, we set the number of TCN layers to 9 and transformed the head nodes in the GCN to GAT to improve the model's parallelism. For learning, we set the hidden layer size to 64.

Acknowledgements
This work was supported by the National Key Research and Development Program of China (2022ZD0115604) and National Natural Science Foundation of China (Grant Nos. 42130608, 42075142) and the Sichuan Science and Technology program (Grant Nos. 2023ZHCG0018, 2023NSFSC0470, 2021YFQ0053, 2022YFG0152, 23NSFSC2224, 2020JDTD0020, 2022YFG0026, 2021YFG0018, 2020YJ0241).
Author contributions
Conceptualization, X.L., Q.T., C. S. and X.W.; Data curation, Z.R.; Formal analysis, X.L. and J.P.; Investigation, Z.R., X.L. and C.S.; Methodology, Z.R.; Project administration, Z.R.; Resources, X.L. and X.W.; Software, Z.R.; Supervision, X.L. and K.C.; Validation, Z.R. and X.L.; Visualization, Z.R.; Writing—original draft, Z.R.; Writing—review & editing, X.L. All authors reviewed the manuscript.
Data availability
The datasets analyzed during the current study are available in GitHub repositories: the NYC dataset at https://github.com/yuehu9/Con-GAE and the PEMS dataset at https://github.com/dleyan/STGAN . The dataset used for testing generalization capability is available at https://dgraph.xinye.com/dataset . Additionally, we collected and analyzed some of the data used in our experiments; the experimental data collected during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare no competing interests.

Sci Rep. 2024 Jan 13; 14:1247. License: CC BY.